Confusion regarding channel directions and blocking in Go

In a function definition, if a channel is an argument without a direction, does it have to send or receive something?
func makeRequest(url string, ch chan<- string, results chan<- string) {
    start := time.Now()
    resp, err := http.Get(url)
    defer resp.Body.Close()
    if err != nil {
        fmt.Printf("%v", err)
    }
    resp, err = http.Post(url, "text/plain", bytes.NewBuffer([]byte("Hey")))
    defer resp.Body.Close()
    secs := time.Since(start).Seconds()
    if err != nil {
        fmt.Printf("%v", err)
    }
    // Cannot move past this.
    ch <- fmt.Sprintf("%f", secs)
    results <- <-ch
}

func MakeRequestHelper(url string, ch chan string, results chan string, iterations int) {
    for i := 0; i < iterations; i++ {
        makeRequest(url, ch, results)
    }
    for i := 0; i < iterations; i++ {
        fmt.Println(<-ch)
    }
}

func main() {
    args := os.Args[1:]
    threadString := args[0]
    iterationString := args[1]
    url := args[2]
    threads, err := strconv.Atoi(threadString)
    if err != nil {
        fmt.Printf("%v", err)
    }
    iterations, err := strconv.Atoi(iterationString)
    if err != nil {
        fmt.Printf("%v", err)
    }
    channels := make([]chan string, 100)
    for i := range channels {
        channels[i] = make(chan string)
    }
    // results aggregates all the things received by channels in all goroutines
    results := make(chan string, iterations*threads)
    for i := 0; i < threads; i++ {
        go MakeRequestHelper(url, channels[i], results, iterations)
    }
    resultSlice := make([]string, threads*iterations)
    for i := 0; i < threads*iterations; i++ {
        resultSlice[i] = <-results
    }
}
In the above code, ch <- or <-results seems to be blocking every goroutine that executes makeRequest.
I am new to Go's concurrency model. I understand that sending to and receiving from a channel blocks, but I find it difficult to tell what is blocking what in this code.
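For reference while reading the answers, here is a minimal sketch of the basic blocking rule in play (my illustration, not part of the original program): a send on an unbuffered channel blocks until another goroutine is ready to receive, and a receive blocks until another goroutine sends.

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan string) // unbuffered

    go func() {
        time.Sleep(100 * time.Millisecond)
        ch <- "hello" // blocks until main is ready to receive
    }()

    fmt.Println("waiting...")
    fmt.Println(<-ch) // blocks here until the goroutine above sends
}

In the question's makeRequest, ch <- fmt.Sprintf(...) is such a send, and MakeRequestHelper only reads from ch after all its makeRequest calls return, which is one way this kind of blocking can arise.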

I'm not really sure what you are doing... It seems really convoluted. I suggest you read up on how to use channels:
https://tour.golang.org/concurrency/2
That being said, you have so much going on in your code that it was easier to just gut it down to something a bit simpler (it can be simplified further). I left comments to explain the code.
package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "sync"
    "time"
)

// using structs is a nice way to organize your code
type Worker struct {
    wg        sync.WaitGroup
    semaphore chan struct{}
    result    chan Result
    client    http.Client
}

// group return values so that you don't have to send on many channels
type Result struct {
    duration float64
    results  string
}

// closing your channels will stop the for loop in main
func (w *Worker) Close() {
    close(w.semaphore)
    close(w.result)
}

func (w *Worker) MakeRequest(url string) {
    // a semaphore is a simple way to limit the number of goroutines running at any single point in time
    // google them, Go uses them often
    w.semaphore <- struct{}{}
    defer func() {
        w.wg.Done()
        <-w.semaphore
    }()
    start := time.Now()
    resp, err := w.client.Get(url)
    if err != nil {
        log.Println("error", err)
        return
    }
    defer resp.Body.Close()
    // I don't have an example where I also need to POST anything, but the point should be clear
    // resp, err = http.Post(url, "text/plain", bytes.NewBuffer([]byte("Hey")))
    // if err != nil {
    //     log.Println("error", err)
    //     return
    // }
    // defer resp.Body.Close()
    secs := time.Since(start).Seconds()
    b, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Println("error", err)
        return
    }
    w.result <- Result{duration: secs, results: string(b)}
}

func main() {
    urls := []string{"https://facebook.com/", "https://twitter.com/", "https://google.com/", "https://youtube.com/", "https://linkedin.com/", "https://wordpress.org/",
        "https://instagram.com/", "https://pinterest.com/", "https://wikipedia.org/", "https://wordpress.com/", "https://blogspot.com/", "https://apple.com/",
    }
    workerNumber := 5
    worker := Worker{
        semaphore: make(chan struct{}, workerNumber),
        result:    make(chan Result),
        client:    http.Client{Timeout: 5 * time.Second},
    }
    // use sync.WaitGroup to allow your code to wait for
    // all your goroutines to finish
    for _, url := range urls {
        worker.wg.Add(1)
        go worker.MakeRequest(url)
    }
    // by running wait and close in a separate goroutine
    // I can get to the for loop below and iterate on the results
    // in a non-blocking fashion
    go func() {
        worker.wg.Wait()
        worker.Close()
    }()
    // do something with the results channel
    for res := range worker.result {
        fmt.Printf("Request took %.2f seconds.\nResults: %s\n\n", res.duration, res.results)
    }
}

The channels in channels are nil (no make is executed; you make the slice but not the channels), so any send or receive will block. I'm not sure exactly what you're trying to do here, but that's the basic problem.
See https://golang.org/doc/effective_go.html#channels for an explanation of how channels work.
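To illustrate that point (a small sketch of my own, not part of the original answer): the zero value of a channel is nil, and both sends and receives on a nil channel block forever, so a goroutine handed a channel that was never made can never make progress.

package main

import "fmt"

func main() {
    channels := make([]chan string, 1) // the slice is allocated, but channels[0] is a nil channel

    go func() {
        channels[0] <- "hi" // a send on a nil channel blocks forever
    }()

    // Without channels[0] = make(chan string), this receive also blocks forever,
    // and the runtime aborts with "all goroutines are asleep - deadlock!".
    fmt.Println(<-channels[0])
}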

Related

Closing a channel after a timeout

I have a small Go program that makes a number of requests every tick (1 second). I'm attempting to make these requests concurrently. I want to count and log the number of successful requests made in one tick, and then move on. If requests don't complete in time, I don't want to block the main ticker.
The code below achieves this, but I don't believe I'm closing the channel in concurrentReqs correctly, as any requests that miss the deadline still log against the previous tick. I also believe the ticker in the main function will block waiting for concurrentReqs to finish. I tried moving the close(ch) inside the timeout case of my select, but this results in a 'send on closed channel' error.
My understanding is that using contexts with a deadline (probably set in my main ticker) might be a solution for this, but I'm struggling to wrap my head around them, and I wonder if there's something else I can try.
Note: the timeout in concurrentReqs is deliberately low, since I'm testing locally.
package main

import (
    "fmt"
    "net/http"
    "time"
)

type response struct {
    num             int
    statusCode      int
    requestDuration time.Duration
}

func singleRequest(url string, i int, tick int) response {
    start := time.Now()
    client := http.Client{Timeout: 100 * time.Millisecond}
    resp, _ := client.Get(url)
    fmt.Printf("%d: %d\n", tick, i)
    defer resp.Body.Close()
    return response{statusCode: int(resp.StatusCode), requestDuration: time.Since(start)}
}

func concurrentReqs(url string, reqsPerTick int, tick int) (results []response) {
    ch := make(chan response, reqsPerTick)
    timeout := time.After(20 * time.Millisecond) // deliberately low
    results = make([]response, 0)
    for i := 0; i < reqsPerTick; i++ {
        go func(i int, t int) {
            ch <- singleRequest(url, i, tick)
        }(i, tick)
    }
    for i := 0; i < reqsPerTick; i++ {
        select {
        case response := <-ch:
            results = append(results, response)
        case <-timeout:
            return
        }
    }
    close(ch)
    return
}

func main() {
    var url string = "http://end-point.svc/req"
    c := time.Tick(1 * time.Second)
    for next := range c {
        things := concurrentReqs(url, 100, next.Second())
        fmt.Printf("%s: Successful Reqs - %d\n", int(next.Second()), len(things))
    }
}
I suggest using a context with a timeout for cancellation and timing out. I also think a wait group and mutex-protected result writing keep things simple here by eliminating the second loop.
package main

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "sync"
    "time"
)

type response struct {
    num             int
    statusCode      int
    requestDuration time.Duration
}

func singleRequest(ctx context.Context, url string, i int, tick int) (response, error) {
    start := time.Now()
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return response{requestDuration: time.Since(start)}, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return response{requestDuration: time.Since(start)}, err
    }
    fmt.Printf("%d: %d\n", tick, i)
    defer resp.Body.Close()
    return response{statusCode: int(resp.StatusCode), requestDuration: time.Since(start)}, nil
}

func concurrentReqs(url string, reqsPerTick int, tick int) (results []response) {
    mu := sync.Mutex{}
    results = make([]response, 0)
    ctx, cancel := context.WithTimeout(context.Background(), 20*time.Millisecond)
    defer cancel()
    wg := sync.WaitGroup{}
    for i := 0; i < reqsPerTick; i++ {
        wg.Add(1)
        go func(i int, t int) {
            defer wg.Done()
            response, err := singleRequest(ctx, url, i, tick)
            if err != nil {
                log.Print(err)
                return
            }
            mu.Lock()
            results = append(results, response)
            mu.Unlock()
        }(i, tick)
    }
    wg.Wait()
    return results
}

func main() {
    var url string = "http://end-point.svc/req"
    c := time.Tick(1 * time.Second)
    for next := range c {
        // You may want to wrap this in a goroutine to make sure a tick is not skipped
        // (see the sketch after this code). Otherwise, if concurrentReqs takes more
        // than a tick for whatever reason, a tick will be skipped.
        things := concurrentReqs(url, 100, next.Second())
        fmt.Printf("%d: Successful Reqs - %d\n", next.Second(), len(things))
    }
}
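The comment in main above suggests launching each tick's batch in its own goroutine so a slow batch never delays the ticker. A minimal sketch of that idea (my illustration, reusing the url variable and the concurrentReqs function defined above):

func main() {
    var url string = "http://end-point.svc/req"
    c := time.Tick(1 * time.Second)
    for next := range c {
        // Run this tick's batch concurrently so the ticker loop is never blocked
        // by a batch that takes longer than one tick.
        go func(tick int) {
            things := concurrentReqs(url, 100, tick)
            fmt.Printf("%d: Successful Reqs - %d\n", tick, len(things))
        }(next.Second())
    }
}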

Concurrency issues with crawler

I'm trying to build a concurrent crawler based on the Go Tour and some other SO answers on the subject. What I currently have is below, but I think there are two subtle issues:
1. Sometimes I get 16 URLs in the result and sometimes 17 (debug print in main). I know this because when I change WriteToSlice to Read, the line 'Read: end, counter = ' in Read is sometimes never reached, and that always coincides with getting 16 URLs.
2. I have trouble with the err channel: I get no messages on it, even when I run my main Crawl method with an address like www.golang.org, i.e. without a valid scheme, in which case an error should be sent via the err channel.
Concurrency is a really difficult topic; help and advice will be appreciated.
package main

import (
    "fmt"
    "net/http"
    "sync"

    "golang.org/x/net/html"
)

type urlCache struct {
    urls map[string]struct{}
    sync.Mutex
}

func (v *urlCache) Set(url string) bool {
    v.Lock()
    defer v.Unlock()
    _, exist := v.urls[url]
    v.urls[url] = struct{}{}
    return !exist
}

func newURLCache() *urlCache {
    return &urlCache{
        urls: make(map[string]struct{}),
    }
}

type results struct {
    data chan string
    err  chan error
}

func newResults() *results {
    return &results{
        data: make(chan string, 1),
        err:  make(chan error, 1),
    }
}

func (r *results) close() {
    close(r.data)
    close(r.err)
}

func (r *results) WriteToSlice(s *[]string) {
    for {
        select {
        case data := <-r.data:
            *s = append(*s, data)
        case err := <-r.err:
            fmt.Println("e ", err)
        }
    }
}

func (r *results) Read() {
    fmt.Println("Read: start")
    counter := 0
    for c := range r.data {
        fmt.Println(c)
        counter++
    }
    fmt.Println("Read: end, counter = ", counter)
}

func crawl(url string, depth int, wg *sync.WaitGroup, cache *urlCache, res *results) {
    defer wg.Done()
    if depth == 0 || !cache.Set(url) {
        return
    }
    response, err := http.Get(url)
    if err != nil {
        res.err <- err
        return
    }
    defer response.Body.Close()
    node, err := html.Parse(response.Body)
    if err != nil {
        res.err <- err
        return
    }
    urls := grablUrls(response, node)
    res.data <- url
    for _, url := range urls {
        wg.Add(1)
        go crawl(url, depth-1, wg, cache, res)
    }
}

func grablUrls(resp *http.Response, node *html.Node) []string {
    var f func(*html.Node) []string
    var results []string
    f = func(n *html.Node) []string {
        if n.Type == html.ElementNode && n.Data == "a" {
            for _, a := range n.Attr {
                if a.Key != "href" {
                    continue
                }
                link, err := resp.Request.URL.Parse(a.Val)
                if err != nil {
                    continue
                }
                results = append(results, link.String())
            }
        }
        for c := n.FirstChild; c != nil; c = c.NextSibling {
            f(c)
        }
        return results
    }
    res := f(node)
    return res
}

// Crawl ...
func Crawl(url string, depth int) []string {
    wg := &sync.WaitGroup{}
    output := &[]string{}
    visited := newURLCache()
    results := newResults()
    defer results.close()
    wg.Add(1)
    go crawl(url, depth, wg, visited, results)
    go results.WriteToSlice(output)
    // go results.Read()
    wg.Wait()
    return *output
}

func main() {
    r := Crawl("https://www.golang.org", 2)
    // r := Crawl("www.golang.org", 2) // no scheme, an error should be generated and sent via err
    fmt.Println(len(r))
}
Both your questions 1 and 2 are a result of the same bug.
In Crawl() you are not waiting for this goroutine to finish: go results.WriteToSlice(output). When the last crawl() call returns, the wait group is released, and the output is returned and printed before the WriteToSlice function is finished with the data and err channels. So what happens is this:
1. crawl() finishes, placing data in results.data and results.err.
2. The wait group's Wait() unblocks, causing main() to print the length of the result []string.
3. Only then does WriteToSlice pick up the last data (or err) item from the channel.
You need to return from Crawl() not only when the data is done being written to the channel, but also when the channel has been read in its entirety (including the buffer). A good way to do this is to close channels when you are sure you are done with them. By organizing your code this way, you can block on the goroutine that is draining the channels, and instead of using the wait group to release main, you wait until the channels are 100% done.
You can see this on gobyexample: https://gobyexample.com/closing-channels. Remember that when you close a channel, it can still be read until the last item is taken. So you can close a buffered channel, and the reader will still get all the items that were queued in it.
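A tiny sketch of that behaviour (my illustration, standard library only): after close, a buffered channel still yields everything that was queued, and a range loop exits cleanly once it is drained.

package main

import "fmt"

func main() {
    ch := make(chan string, 3)
    ch <- "a"
    ch <- "b"
    ch <- "c"
    close(ch) // no more sends are allowed, but the buffered items are still readable

    // range keeps receiving the queued items and stops once the channel is drained
    for v := range ch {
        fmt.Println(v)
    }
}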
There is some code structure that could change to make this cleaner, but here is a quick way to fix your program: change Crawl to block on WriteToSlice, close the data channel when the crawl function finishes, and wait for WriteToSlice to finish.
// Crawl ...
func Crawl(url string, depth int) []string {
    wg := &sync.WaitGroup{}
    output := &[]string{}
    visited := newURLCache()
    results := newResults()
    go func() {
        wg.Add(1)
        go crawl(url, depth, wg, visited, results)
        wg.Wait()
        // All data is written, this makes `WriteToSlice()` unblock
        close(results.data)
    }()
    // This will block until results.data is closed
    results.WriteToSlice(output)
    close(results.err)
    return *output
}
Then in WriteToSlice, you have to check for the closed channel to exit the for loop:
func (r *results) WriteToSlice(s *[]string) {
    for {
        select {
        case data, open := <-r.data:
            if !open {
                return // All data done
            }
            *s = append(*s, data)
        case err := <-r.err:
            fmt.Println("e ", err)
        }
    }
}
Here is the full code: https://play.golang.org/p/GBpGk-lzrhd (it won't work in the playground)

Concurrently process a lot of files and upload to S3 in Go

I'm migrating a lot of files that are currently stored in a relational database to Amazon S3. I'm using Go because I'd heard about its concurrency, but I'm getting very low throughput. I'm new to Go, so I'm probably not doing this in the best way possible.
This is what I have at the moment
type Attachment struct {
    BinaryData []byte    `db:"BinaryData"`
    CreatedAt  time.Time `db:"CreatedDT"`
    Id         int       `db:"Id"`
}

func main() {
    connString := os.Getenv("CONNECTION_STRING")
    log.SetFlags(log.Ltime)
    db, err := sqlx.Connect("sqlserver", connString)
    if err != nil {
        panic(err)
    }
    log.Print("Connected to database")
    sql := "SELECT TOP 1000 Id,CreatedDT, BinaryData FROM Attachment"
    attachmentsDb := []Attachment{}
    err = db.Select(&attachmentsDb, sql)
    if err != nil {
        log.Fatal(err)
    }
    session, err := session.NewSession(&aws.Config{
        Region: aws.String("eu-west-1"),
    })
    if err != nil {
        log.Fatal(err)
        return
    }
    svc := s3.New(session)
    wg := &sync.WaitGroup{}
    for _, att := range attachmentsDb {
        done := make(chan error)
        go func(wg *sync.WaitGroup, att Attachment, out chan error) {
            wg.Add(1)
            err := <-saveAttachment(&att, svc)
            if err == nil {
                log.Printf("CV Written %d", att.Id)
            }
            wg.Done()
            out <- err
        }(wg, att, done)
        <-done
    }
    wg.Wait()
    //close(in)
    defer db.Close()
}

func saveAttachment(att *Attachment, svc *s3.S3) <-chan error {
    out := make(chan error)
    bucket := os.Getenv("BUCKET")
    go func() {
        defer close(out)
        key := getKey(att)
        input := &s3.PutObjectInput{Bucket: &bucket,
            Key:  &key,
            Body: bytes.NewReader(att.BinaryData),
        }
        _, err := svc.PutObject(input)
        if err != nil {
            //log.Fatal(err)
            log.Printf("Error uploading CV %d error %v", att.Id, err)
        }
        out <- err
    }()
    return out
}

func getKey(att *Attachment) string {
    return fmt.Sprintf("%s/%d", os.Getenv("KEY"), att.Id)
}
This loop executes sequentially because every iteration waits for a result from the done channel, so there is no benefit to running multiple goroutines. There is also no need to create a new goroutine in saveAttachment(), because you already create one in the loop.
func main() {
    //....
    svc := s3.New(session)
    wg := &sync.WaitGroup{}
    for _, att := range attachmentsDb {
        done := make(chan error)
        // New goroutine
        go func(wg *sync.WaitGroup, att Attachment, out chan error) {
            wg.Add(1)
            // Already in a goroutine here, but saveAttachment() creates yet another goroutine?
            err := <-saveAttachment(&att, svc) // There is a goroutine created in this func
            if err == nil {
                log.Printf("CV Written %d", att.Id)
            }
            wg.Done()
            out <- err
        }(wg, att, done)
        <-done // This blocks until it receives the result; only then does the next loop iteration start
    }
}

func saveAttachment(att *Attachment, svc *s3.S3) <-chan error {
    out := make(chan error)
    bucket := os.Getenv("BUCKET")
    // Why a new goroutine?
    go func() {
        defer close(out)
        key := getKey(att)
        input := &s3.PutObjectInput{Bucket: &bucket,
            Key:  &key,
            Body: bytes.NewReader(att.BinaryData),
        }
        _, err := svc.PutObject(input)
        if err != nil {
            //log.Fatal(err)
            log.Printf("Error uploading CV %d error %v", att.Id, err)
        }
        out <- err
    }()
    return out
}
If you want to upload in parallel, don't do it that way. You can quickly fix it like this:
func main() {
    //....
    svc := s3.New(session)
    wg := &sync.WaitGroup{}
    // Number of goroutines = number of attachments
    for _, att := range attachmentsDb {
        wg.Add(1)
        // One goroutine to upload each Attachment
        go func(wg *sync.WaitGroup, att Attachment) {
            err := saveAtt(&att, svc)
            if err == nil {
                log.Printf("CV Written %d", att.Id)
            }
            wg.Done()
        }(wg, att)
        // No blocking after creating a goroutine; the loop continues and creates the next one
    }
    wg.Wait()
    fmt.Println("done")
}

// This func is executed in a goroutine, so there is no need to create another goroutine inside it
func saveAtt(att *Attachment, svc *s3.S3) error {
    bucket := os.Getenv("BUCKET")
    key := getKey(att)
    input := &s3.PutObjectInput{Bucket: &bucket,
        Key:  &key,
        Body: bytes.NewReader(att.BinaryData),
    }
    _, err := svc.PutObject(input)
    if err != nil {
        log.Printf("Error uploading CV %d error %v", att.Id, err)
    }
    return err
}
But this approach isn't good when there are very many attachments, because the number of goroutines equals the number of attachments. In that case you need a goroutine pool so you can limit the number of goroutines that run.
Warning: this is just an example to show the goroutine-pool logic; you need to adapt it to your own code.
//....
// Create an attachment queue
queue := make(chan *Attachment) // Or use a buffered channel: queue := make(chan *Attachment, bufferedSize)
// Send all attachments to the queue
go func() {
    for i := range attachmentsDb {
        // Take the address of the slice element, not of the loop variable,
        // otherwise every item in the queue points to the same Attachment.
        queue <- &attachmentsDb[i]
    }
}()
//....
// Create a goroutine pool
svc := s3.New(session)
wg := &sync.WaitGroup{}
// Use this as a const
workerCount := 5
// Number of goroutines = workerCount
for i := 1; i <= workerCount; i++ {
    // New goroutine
    go func() {
        // Get attachments from the queue to upload. When the queue channel is empty, this blocks.
        for att := range queue {
            err := saveAtt(att, svc)
            if err == nil {
                log.Printf("CV Written %d", att.Id)
            }
        }
    }()
}
//....
// Warning!!! Close the queue channel only WHEN all attachments have been sent to it; this just shows how to end the goroutine pool.
// Once the queue channel is closed and drained, all upload goroutines end (because of `att := range queue`).
close(queue)
//....
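To tie those fragments together, here is one way the whole pool could be assembled (my sketch, not the answerer's code: the uploadAll helper and its wiring are illustrative, and it assumes the Attachment type and the saveAtt function shown above):

// uploadAll runs a fixed-size pool of uploader goroutines and returns only after
// every attachment has been attempted. Closing the queue is safe here because it
// happens after all sends are done.
func uploadAll(attachmentsDb []Attachment, svc *s3.S3, workerCount int) {
    queue := make(chan *Attachment)
    wg := &sync.WaitGroup{}

    // Start the workers.
    for i := 0; i < workerCount; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for att := range queue { // ends when queue is closed and drained
                if err := saveAtt(att, svc); err == nil {
                    log.Printf("CV Written %d", att.Id)
                }
            }
        }()
    }

    // Feed the queue, then close it so the workers' range loops can finish.
    for i := range attachmentsDb {
        queue <- &attachmentsDb[i]
    }
    close(queue)

    // Wait until the workers have drained the queue.
    wg.Wait()
}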

How to close a channel

I'm trying to adapt this example:
https://gobyexample.com/worker-pools
But I don't know how to stop reading from the channel, because the program doesn't exit at the end of the channel loop.
Can you explain how to exit the program?
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"

    "github.com/SlyMarbo/rss"
)

func readLines(path string) ([]string, error) {
    file, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer file.Close()
    var lines []string
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        lines = append(lines, scanner.Text())
    }
    return lines, scanner.Err()
}

func worker(id int, jobs <-chan string, results chan<- string) {
    for url := range jobs {
        fmt.Println("worker", id, "processing job", url)
        feed, err := rss.Fetch(url)
        if err != nil {
            fmt.Println("Error on: ", url)
            continue
        }
        borne := 0
        for _, value := range feed.Items {
            if borne < 5 {
                results <- value.Link
                borne = borne + 1
            } else {
                continue
            }
        }
    }
}

func main() {
    jobs := make(chan string)
    results := make(chan string)
    for w := 1; w <= 16; w++ {
        go worker(w, jobs, results)
    }
    urls, err := readLines("flux.txt")
    if err != nil {
        log.Fatalf("readLines: %s", err)
    }
    for _, url := range urls {
        jobs <- url
    }
    close(jobs)
    // it seems the program runs over...
    for msg := range results {
        fmt.Println(msg)
    }
}
The flux.txt file is a flat text file like:
http://blog.case.edu/news/feed.atom
...
The problem is that, in the example you are referring to, the worker pool reads from results exactly 9 times:
for a := 1; a <= 9; a++ {
    <-results
}
Your program, on the other hand, does a range loop over results, which has different semantics in Go: a range over a channel does not stop until the channel is closed.
for msg := range results {
    fmt.Println(msg)
}
To fix your problem you need to close the results channel. However, if you just call close(results) before the for loop, you will most probably get a panic, because the workers might still be writing to results.
To fix this, you need a way to be notified when all the workers are done. You can do this either with a sync.WaitGroup:
const (
    workers = 16
)

func main() {
    jobs := make(chan string, 100)
    results := make(chan string, 100)
    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        // Add before starting the goroutine, otherwise wg.Wait() below can
        // return before the workers have registered themselves.
        wg.Add(1)
        go func(w int) {
            defer wg.Done()
            worker(w, jobs, results)
        }(w)
    }
    urls, err := readLines("flux.txt")
    if err != nil {
        log.Fatalf("readLines: %s", err)
    }
    for _, url := range urls {
        jobs <- url
    }
    close(jobs)
    wg.Wait()
    close(results)
    for msg := range results {
        fmt.Println(msg)
    }
}
Or with a done channel:
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"

    "github.com/SlyMarbo/rss"
)

func readLines(path string) ([]string, error) {
    file, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer file.Close()
    var lines []string
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        lines = append(lines, scanner.Text())
    }
    return lines, scanner.Err()
}

func worker(id int, jobs <-chan string, results chan<- string, done chan struct{}) {
    for url := range jobs {
        fmt.Println("worker", id, "processing job", url)
        feed, err := rss.Fetch(url)
        if err != nil {
            fmt.Println("Error on: ", url)
            continue
        }
        borne := 0
        for _, value := range feed.Items {
            if borne < 5 {
                results <- value.Link
                borne = borne + 1
            } else {
                continue
            }
        }
    }
    close(done)
}

const (
    workers = 16
)

func main() {
    jobs := make(chan string, 100)
    results := make(chan string, 100)
    dones := make([]chan struct{}, workers)
    for w := 0; w < workers; w++ {
        dones[w] = make(chan struct{})
        go worker(w, jobs, results, dones[w])
    }
    urls, err := readLines("flux.txt")
    if err != nil {
        log.Fatalf("readLines: %s", err)
    }
    for _, url := range urls {
        jobs <- url
    }
    close(jobs)
    for _, done := range dones {
        <-done
    }
    close(results)
    for msg := range results {
        fmt.Println(msg)
    }
}

Any better way to keep track of goroutine responses?

I'm trying to get my head around goroutines. I've created a simple program that performs the same search in parallel across multiple search engines. At the moment, to keep track of the responses, I count the number I've received. It seems a bit amateur though.
Is there a better way of knowing when I've received a response from all of the goroutines in the following code?
package main

import (
    "fmt"
    "log"
    "net/http"
)

type Query struct {
    url    string
    status string
}

func search(url string, out chan Query) {
    fmt.Printf("Fetching URL %s\n", url)
    resp, err := http.Get(url)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    out <- Query{url, resp.Status}
}

func main() {
    searchTerm := "carrot"
    fmt.Println("Hello world! Searching for ", searchTerm)
    searchEngines := []string{
        "http://www.bing.co.uk/?q=",
        "http://www.google.co.uk/?q=",
        "http://www.yahoo.co.uk/?q="}
    out := make(chan Query)
    for i := 0; i < len(searchEngines); i++ {
        go search(searchEngines[i]+searchTerm, out)
    }
    progress := 0
    for {
        // is there a better way of doing this step?
        if progress >= len(searchEngines) {
            break
        }
        fmt.Println("Polling...")
        query := <-out
        fmt.Printf("Status from %s was %s\n", query.url, query.status)
        progress++
    }
}
Please use sync.WaitGroup; there is an example in the package documentation:
searchEngines := []string{
    "http://www.bing.co.uk/?q=",
    "http://www.google.co.uk/?q=",
    "http://www.yahoo.co.uk/?q="}
var wg sync.WaitGroup
// Buffer the channel so the goroutines can finish their sends even though
// nothing receives until after wg.Wait(); with an unbuffered channel the
// sends below would block and wg.Wait() would never return.
out := make(chan Query, len(searchEngines))
for i := 0; i < len(searchEngines); i++ {
    wg.Add(1)
    go func(url string) {
        defer wg.Done()
        fmt.Printf("Fetching URL %s\n", url)
        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        out <- Query{url, resp.Status}
    }(searchEngines[i] + searchTerm)
}
wg.Wait()
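Once wg.Wait() returns, every response is sitting in the buffered out channel, so it can be closed and drained. A short follow-up sketch (my addition, continuing from the snippet above):

close(out) // safe: wg.Wait() guarantees all senders have finished
for query := range out {
    fmt.Printf("Status from %s was %s\n", query.url, query.status)
}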

Resources