Channel hangs, probably not closing at the right place - go

I'm trying to learn Go while writing a small program. The program should walk a path recursively, as efficiently and quickly as possible, and output the full filename (including the path) and the SHA-256 hash of each file.
If hashing a file fails, I want to keep the error and put it into the output string (in the hash position).
The result should be printed to the console as one line per file, like:
fileXYZ||hash
Unfortunately, the program hangs at some point. I guess some of my channels are not being closed properly and are waiting indefinitely for input. I've been trying to fix the problem for quite some time, but without success.
Does anyone have an idea why the output hangs? Many thanks in advance; any input/advice for a Go newcomer is welcome too ;-).
(I wrote separate functions because I want to add more features once this issue is fixed.)
Thanks a lot!
Didier
Here is the code:
package main
import (
"crypto/sha256"
"encoding/hex"
"flag"
"fmt"
"io"
"log"
"os"
"path/filepath"
"time"
)
func main() {
pathParam := flag.String("path", ".", "Enter Filesystem Path to list folders")
flag.Parse()
start := time.Now()
run(*pathParam)
elapsed := time.Since(start)
log.Printf("Time elapsed: %v", elapsed)
}
func run(path string) {
chashes := make(chan string, 50)
cfiles := make(chan string)
go func() {
readfs(path, cfiles)
defer close(cfiles)
}()
go func() {
generateHash(cfiles, chashes)
}()
defer close(chashes)
for hash := range chashes {
fmt.Println(hash)
}
}
func readfs(path string, cfiles chan string) {
files, err := os.ReadDir(path)
if err != nil {
log.Fatalln(err)
}
for _, file := range files {
filename := filepath.Join(path, file.Name())
if file.IsDir() {
readfs(filename, cfiles)
continue
} else {
cfiles <- filename
}
}
}
func generateHash(cfiles chan string, chashes chan string) {
for filename := range cfiles {
go func(filename string) {
var checksum string
var oError bool = false
file, err := os.Open(filename)
if err != nil {
oError = true
errorMsg := "ERROR: " + err.Error()
log.Println(errorMsg)
checksum = errorMsg
}
defer file.Close()
if !oError {
hash := sha256.New()
if _, err := io.Copy(hash, file); err != nil {
errorMsg := "ERROR: " + err.Error()
log.Println(errorMsg)
checksum = errorMsg
}
if len(checksum) == 0 {
checksum = hex.EncodeToString(hash.Sum(nil))
}
}
chashes <- filename + "||" + checksum
}(filename)
} //for files
}

The following loop hangs because chashes is not closed.
for hash := range chashes {
fmt.Println(hash)
}
Fix it by closing chashes after all the hashers have completed. Use a sync.WaitGroup (this also requires adding "sync" to the imports) to wait for them.
func generateHash(cfiles chan string, chashes chan string) {
var wg sync.WaitGroup
for filename := range cfiles {
wg.Add(1)
go func(filename string) {
defer wg.Done()
var checksum string
var oError bool = false
file, err := os.Open(filename)
if err != nil {
oError = true
errorMsg := "ERROR: " + err.Error()
log.Println(errorMsg)
checksum = errorMsg
}
defer file.Close()
if !oError {
hash := sha256.New()
if _, err := io.Copy(hash, file); err != nil {
errorMsg := "ERROR: " + err.Error()
log.Println(errorMsg)
checksum = errorMsg
}
if len(checksum) == 0 {
checksum = hex.EncodeToString(hash.Sum(nil))
}
}
chashes <- filename + "||" + checksum
}(filename)
} //for files
// Wait for the hashers to complete.
wg.Wait()
// Close the channel to cause main() to break
// out of for range on chashes.
close(chashes)
}
Remove defer close(chashes) from run().
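For reference, a minimal sketch of what run() could look like after that change (generateHash is now the only place that closes chashes):
func run(path string) {
    chashes := make(chan string, 50)
    cfiles := make(chan string)
    go func() {
        defer close(cfiles) // closing cfiles ends the range loop in generateHash
        readfs(path, cfiles)
    }()
    go generateHash(cfiles, chashes) // generateHash closes chashes when it is done
    for hash := range chashes {
        fmt.Println(hash)
    }
}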
Run an example on the Go playground.

Related

Panic: send on a closed channel when running a goroutine in a for loop

I'm attempting to make a concurrent version of grep. The program walks directories/subdirectories and returns any lines that match a provided pattern.
I am attempting to run the file searching concurrently once I have all the files to search (see the searchPaths function). Originally I was getting:
fatal error: all goroutines are asleep - deadlock!
Until I added the close(out) at the end of searchPaths, after which it now panics with:
panic: send on closed channel
I am attempting to implement something similar to:
https://go.dev/blog/pipelines#fan-out-fan-in
Is it the case that I am closing the channel at the wrong point?
package main
import (
"fmt"
"io/fs"
"io/ioutil"
"log"
"os"
"path/filepath"
"strings"
"sync"
)
type SearchResult struct {
line string
lineNumber int
}
type Display struct {
filePath string
SearchResult
}
var wg sync.WaitGroup
func (d Display) PrettyPrint() {
fmt.Printf("Line Number: %v\nFilePath: %v\nLine: %v\n\n", d.lineNumber, d.filePath, d.line)
}
func searchLine(pattern string, line string, lineNumber int) (SearchResult, bool) {
if strings.Contains(line, pattern) {
return SearchResult{lineNumber: lineNumber + 1, line: line}, true
}
return SearchResult{}, false
}
func splitIntoLines(file string) []string {
lines := strings.Split(file, "\n")
return lines
}
func fileFromPath(path string) string {
fileContent, err := ioutil.ReadFile(path)
if err != nil {
log.Fatal(err)
}
return string(fileContent)
}
func getRecursiveFilePaths(inputDir string) []string {
var paths []string
err := filepath.Walk(inputDir, func(path string, info fs.FileInfo, err error) error {
if err != nil {
fmt.Printf("prevent panic by handling failure accessing a path %q: %v\n", path, err)
return err
}
if !info.IsDir() {
paths = append(paths, path)
}
return nil
})
if err != nil {
fmt.Printf("Error walking the path %q: %v\n", inputDir, err)
}
return paths
}
func searchPaths(paths []string, pattern string) <-chan Display {
out := make(chan Display)
for _, path := range paths {
wg.Add(1)
go func() {
defer wg.Done()
for _, display := range searchFile(path, pattern) {
out <- display
}
}()
}
close(out)
return out
}
func searchFile(path string, pattern string) []Display {
var out []Display
input := fileFromPath(path)
lines := splitIntoLines(input)
for index, line := range lines {
if searchResult, ok := searchLine(pattern, line, index); ok {
out = append(out, Display{path, searchResult})
}
}
return out
}
func main() {
pattern := os.Args[1]
dirPath := os.Args[2]
paths := getRecursiveFilePaths(dirPath)
out := searchPaths(paths, pattern)
wg.Wait()
for d := range out {
d.PrettyPrint()
}
}
There were two main issues with this code:
You need to close the channel only after wg.Wait() completes. You can do this in a separate goroutine, as shown below.
Because the path variable in searchPaths is reassigned on every iteration of the for loop, it is not good practice to use that variable directly in the goroutines; a better approach is to pass it as an argument (illustrated in isolation after the fixed code below).
package main
import (
"fmt"
"io/fs"
"io/ioutil"
"log"
"os"
"path/filepath"
"strings"
"sync"
)
type SearchResult struct {
line string
lineNumber int
}
type Display struct {
filePath string
SearchResult
}
var wg sync.WaitGroup
func (d Display) PrettyPrint() {
fmt.Printf("Line Number: %v\nFilePath: %v\nLine: %v\n\n", d.lineNumber, d.filePath, d.line)
}
func searchLine(pattern string, line string, lineNumber int) (SearchResult, bool) {
if strings.Contains(line, pattern) {
return SearchResult{lineNumber: lineNumber + 1, line: line}, true
}
return SearchResult{}, false
}
func splitIntoLines(file string) []string {
lines := strings.Split(file, "\n")
return lines
}
func fileFromPath(path string) string {
fileContent, err := ioutil.ReadFile(path)
if err != nil {
log.Fatal(err)
}
return string(fileContent)
}
func getRecursiveFilePaths(inputDir string) []string {
var paths []string
err := filepath.Walk(inputDir, func(path string, info fs.FileInfo, err error) error {
if err != nil {
fmt.Printf("prevent panic by handling failure accessing a path %q: %v\n", path, err)
return err
}
if !info.IsDir() {
paths = append(paths, path)
}
return nil
})
if err != nil {
fmt.Printf("Error walking the path %q: %v\n", inputDir, err)
}
return paths
}
func searchPaths(paths []string, pattern string) chan Display {
out := make(chan Display)
for _, path := range paths {
wg.Add(1)
go func(p string, w *sync.WaitGroup) { // as the path variable changes value on every loop iteration, it's better to pass it as an argument to the goroutine
defer w.Done()
for _, display := range searchFile(p, pattern) {
out <- display
}
}(path, &wg)
}
return out
}
func searchFile(path string, pattern string) []Display {
var out []Display
input := fileFromPath(path)
lines := splitIntoLines(input)
for index, line := range lines {
if searchResult, ok := searchLine(pattern, line, index); ok {
out = append(out, Display{path, searchResult})
}
}
return out
}
func main() {
pattern := os.Args[1]
dirPath := os.Args[2]
paths := getRecursiveFilePaths(dirPath)
out := searchPaths(paths, pattern)
go func(){
wg.Wait() // waiting before closing the channel
close(out)
}()
count := 0
for d := range out {
fmt.Println(count)
d.PrettyPrint()
count += 1
}
}
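To illustrate point 2 in isolation, here is a standalone sketch (the paths values are just placeholders). Before Go 1.22 the loop variable was shared across iterations, so goroutines started inside the loop could all observe its last value; since Go 1.22 each iteration gets a fresh variable, but passing the value explicitly still makes the intent obvious:
package main

import (
    "fmt"
    "sync"
)

func main() {
    paths := []string{"a.txt", "b.txt", "c.txt"}
    var wg sync.WaitGroup

    // Pitfall (pre-Go 1.22): every goroutine captures the same variable
    // and may print the last path three times.
    for _, path := range paths {
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println("captured:", path)
        }()
    }
    wg.Wait()

    // Fix: pass the current value as an argument.
    for _, path := range paths {
        wg.Add(1)
        go func(p string) {
            defer wg.Done()
            fmt.Println("argument:", p)
        }(path)
    }
    wg.Wait()
}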

Unable to loop through golang dynamic channels

I want to loop through the menu's options. However, it gets stuck at the first option, since a select without "default:" blocks, and it does not know that more options will appear dynamically.
Below is the broken code:
package main
import (
"bytes"
"fmt"
"io/ioutil"
"log"
"os/exec"
"strings"
"time"
"github.com/getlantern/systray"
"gopkg.in/yaml.v3"
)
var menuItensPtr []*systray.MenuItem
var config map[string]string
var commands []string
func main() {
config = readconfig()
systray.Run(onReady, onExit)
}
func onReady() {
systray.SetIcon(getIcon("assets/menu.ico"))
menuItensPtr = make([]*systray.MenuItem,0)
commands = make([]string,0)
for k, v := range config {
menuItemPtr := systray.AddMenuItem(k, k)
menuItensPtr = append(menuItensPtr, menuItemPtr)
commands = append(commands, v)
}
systray.AddSeparator()
// mQuit := systray.AddMenuItem("Quit", "Quits this app")
go func() {
for {
systray.SetTitle("My tray menu")
systray.SetTooltip("https://github.com/evandrojr/my-tray-menu")
time.Sleep(1 * time.Second)
}
}()
go func() {
for{
for i, menuItenPtr := range menuItensPtr {
select {
/// EXECUTION GETS STUCK HERE!!!!!!!
case<-menuItenPtr.ClickedCh:
execute(commands[i])
}
}
// select {
// case <-mQuit.ClickedCh:
// systray.Quit()
// return
// // default:
// }
}
}()
}
func onExit() {
// Cleaning stuff will go here.
}
func getIcon(s string) []byte {
b, err := ioutil.ReadFile(s)
if err != nil {
fmt.Print(err)
}
return b
}
func execute(commands string){
command_array:= strings.Split(commands, " ")
command:=""
command, command_array = command_array[0], command_array[1:]
cmd := exec.Command(command, command_array ...)
var out bytes.Buffer
cmd.Stdout = &out
err := cmd.Run()
if err != nil {
log.Fatal(err)
}
// fmt.Printf("Output %s\n", out.String())
}
func readconfig() map[string]string{
yfile, err := ioutil.ReadFile("my-tray-menu.yaml")
if err != nil {
log.Fatal(err)
}
data := make(map[string]string)
err2 := yaml.Unmarshal(yfile, &data)
if err2 != nil {
log.Fatal(err2)
}
for k, v := range data {
fmt.Printf("%s -> %s\n", k, v)
}
return data
}
Below is the ugly workaround that works:
package main
import (
"bytes"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/getlantern/systray"
"gopkg.in/yaml.v3"
)
var menuItensPtr []*systray.MenuItem
var config map[string]string
var commands []string
var labels []string
var programPath string
func main() {
setProgramPath()
config = readconfig()
time.Sleep(1 * time.Second)
systray.Run(onReady, onExit)
}
func onReady() {
systray.SetIcon(getIcon(filepath.Join(programPath,"assets/menu.ico")))
menuItensPtr = make([]*systray.MenuItem, 0)
i := 0
op0 := systray.AddMenuItem(labels[i], commands[i])
i++
op1 := systray.AddMenuItem(labels[i], commands[i])
i++
op2 := systray.AddMenuItem(labels[i], commands[i])
i++
op3 := systray.AddMenuItem(labels[i], commands[i])
i++
systray.AddSeparator()
mQuit := systray.AddMenuItem("Quit", "Quits this app")
go func() {
for {
systray.SetTitle("My tray menu")
systray.SetTooltip("https://github.com/evandrojr/my-tray-menu")
time.Sleep(1 * time.Second)
}
}()
go func() {
for {
select {
// HERE DOES NOT GET STUCK!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
case <-op0.ClickedCh:
execute(commands[0])
case <-op1.ClickedCh:
execute(commands[1])
case <-op2.ClickedCh:
execute(commands[2])
case <-op3.ClickedCh:
execute(commands[3])
case <-mQuit.ClickedCh:
systray.Quit()
return
}
}
}()
}
func onExit() {
// Cleaning stuff will go here.
}
func getIcon(s string) []byte {
b, err := ioutil.ReadFile(s)
if err != nil {
fmt.Print(err)
}
return b
}
func setProgramPath(){
ex, err := os.Executable()
if err != nil {
panic(err)
}
programPath = filepath.Dir(ex)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
}
func execute(commands string) {
command_array := strings.Split(commands, " ")
command := ""
command, command_array = command_array[0], command_array[1:]
cmd := exec.Command(command, command_array...)
var out bytes.Buffer
cmd.Stdout = &out
err := cmd.Run()
if err != nil {
log.Fatal(err)
}
fmt.Printf("Output %s\n", out.String())
}
func readconfig() map[string]string {
yfile, err := ioutil.ReadFile(filepath.Join(programPath,"my-tray-menu.yaml"))
if err != nil {
log.Fatal(err)
}
data := make(map[string]string)
err2 := yaml.Unmarshal(yfile, &data)
if err2 != nil {
log.Fatal(err2)
}
labels = make([]string, 0)
commands = make([]string, 0)
for k, v := range data {
labels = append(labels, k)
commands = append(commands, v)
fmt.Printf("%s -> %s\n", k, v)
}
fmt.Print(len(labels))
return data
}
Full source code here:
https://github.com/evandrojr/my-tray-menu
select "chooses which of a set of possible send or receive operations will proceed". The spec sets out how this choice is made:
If one or more of the communications can proceed, a single one that can proceed is chosen via a uniform pseudo-random selection. Otherwise, if there is a default case, that case is chosen. If there is no default case, the "select" statement blocks until at least one of the communications can proceed.
Your working example:
select {
case <-op0.ClickedCh:
execute(commands[0])
case <-op1.ClickedCh:
execute(commands[1])
// ...
}
uses select successfully to choose one of the offered options. However, if you offer only a single option, e.g.
select {
case<-menuItenPtr.ClickedCh:
execute(commands[i])
}
}
The select will block until <-menuItenPtr.ClickedCh is ready to proceed (e.g. something is received). This is effectively the same as not using a select at all:
<-menuItenPtr.ClickedCh
execute(commands[i])
The result you were expecting can be achieved by providing a default case:
select {
case <-menuItenPtr.ClickedCh:
execute(commands[i])
default:
}
As per the quote from the spec above, the default case is chosen if none of the other cases can proceed. While this may work, it's not a very good solution, because you effectively end up with:
for {
// Check if event happened (not blocking)
}
This will tie up CPU time unnecessarily as it continually loops checking for events. A better solution would be to start a goroutine to monitor each channel:
for i, menuItenPtr := range menuItensPtr {
go func(c chan struct{}, cmd string) {
for range c { execute(cmd) }
}(menuItenPtr.ClickedCh, commands[i])
}
// Start another goroutine to handle quit
The above will probably work, but it does lead to the possibility that execute will be called concurrently (which might cause issues if your code is not thread-safe). One way around this is to use the "fan-in" pattern (as suggested by @kostix and in the Rob Pike video suggested by @John); something like:
cmdChan := make(chan string)
for i, menuItenPtr := range menuItensPtr {
go func(c chan struct{}, cmd string) {
for range c { cmdChan <- cmd }
}(menuItenPtr.ClickedCh, commands[i])
}
go func() {
for {
select {
case cmd := <- cmdChan:
execute(cmd) // Handle command
case <-mQuit.ClickedCh:
systray.Quit()
return
}
}
}()
Note: all of the code above was typed directly into the answer, so please treat it as pseudo code!
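Since the snippets above are pseudo code, here is a minimal, self-contained sketch of the same fan-in idea with the systray specifics stripped out (the channels, commands, and the print standing in for execute are all illustrative):
package main

import (
    "fmt"
    "time"
)

func main() {
    // Stand-ins for the per-menu-item ClickedCh channels.
    clicks := []chan struct{}{make(chan struct{}), make(chan struct{})}
    commands := []string{"echo one", "echo two"}

    // Fan in: one forwarding goroutine per source channel.
    cmdChan := make(chan string)
    for i, c := range clicks {
        go func(c <-chan struct{}, cmd string) {
            for range c {
                cmdChan <- cmd
            }
        }(c, commands[i])
    }

    // Single consumer: commands are handled one at a time.
    go func() {
        for cmd := range cmdChan {
            fmt.Println("execute:", cmd)
        }
    }()

    // Simulate a couple of clicks.
    clicks[0] <- struct{}{}
    clicks[1] <- struct{}{}
    time.Sleep(100 * time.Millisecond) // give the consumer time to print
}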

Best approach to getting results out of goroutines

I have two functions that I cannot change (see first() and second() below). They return some data and errors (the real output types differ, but in the examples below I use (string, error) for simplicity).
I would like to run them in separate goroutines. My approach:
package main
import (
"fmt"
"os"
)
func first(name string) (string, error) {
if name == "" {
return "", fmt.Errorf("empty name is not allowed")
}
fmt.Println("processing first")
return fmt.Sprintf("First hello %s", name), nil
}
func second(name string) (string, error) {
if name == "" {
return "", fmt.Errorf("empty name is not allowed")
}
fmt.Println("processing second")
return fmt.Sprintf("Second hello %s", name), nil
}
func main() {
firstCh := make(chan string)
secondCh := make(chan string)
go func() {
defer close(firstCh)
res, err := first("one")
if err != nil {
fmt.Printf("Failed to run first: %v\n", err)
}
firstCh <- res
}()
go func() {
defer close(secondCh)
res, err := second("two")
if err != nil {
fmt.Printf("Failed to run second: %v\n", err)
}
secondCh <- res
}()
resultsOne := <-firstCh
resultsTwo := <-secondCh
// It's important for my app to do error checking and stop if errors exist.
if resultsOne == "" || resultsTwo == "" {
fmt.Println("There was an ERROR")
os.Exit(1)
}
fmt.Println("ONE:", resultsOne)
fmt.Println("TWO:", resultsTwo)
}
I believe one caveat is that resultsOne := <-firstCh blocks until the first goroutine finishes, but I don't care too much about that.
Can you please confirm that my approach is good? What other approaches would be better in my situation?
The example looks mostly good. A couple of improvements:
Declare your channels as buffered:
firstCh := make(chan string, 1)
secondCh := make(chan string, 1)
With unbuffered channels, send operations block until someone receives. If your second goroutine is much faster than the first, it will still have to wait for the first to finish, since you receive in sequence:
resultsOne := <-firstCh // waiting on this one first
resultsTwo := <-secondCh // sender blocked because the main thread hasn't reached this point
Use "golang.org/x/sync/errgroup".Group (it lives in the golang.org/x/sync module). The program will feel "less native", but it spares you from managing channels by hand; the trade-off, in a less contrived setting, is that you have to synchronize writes to the results yourself:
func main() {
var (
resultsOne string
resultsTwo string
)
g := errgroup.Group{}
g.Go(func() error {
res, err := first("one")
if err != nil {
return err
}
resultsOne = res
return nil
})
g.Go(func() error {
res, err := second("two")
if err != nil {
return err
}
resultsTwo = res
return nil
})
err := g.Wait()
if err != nil {
fmt.Println("There was an ERROR:", err)
os.Exit(1)
}
fmt.Println("ONE:", resultsOne)
fmt.Println("TWO:", resultsTwo)
}

How to close a channel

I'm trying to adapt this example:
https://gobyexample.com/worker-pools
But I don't know how to close the channel, because the program doesn't exit at the end of the channel loop.
Can you explain how to make the program exit?
package main
import (
"github.com/SlyMarbo/rss"
"bufio"
"fmt"
"log"
"os"
)
func readLines(path string) ([]string, error) {
file, err := os.Open(path)
if err != nil {
return nil, err
}
defer file.Close()
var lines []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
return lines, scanner.Err()
}
func worker(id int, jobs <-chan string, results chan<- string) {
for url := range jobs {
fmt.Println("worker", id, "processing job", url)
feed, err := rss.Fetch(url)
if err != nil {
fmt.Println("Error on: ", url)
continue
}
borne := 0
for _, value := range feed.Items {
if borne < 5 {
results <- value.Link
borne = borne +1
} else {
continue
}
}
}
}
func main() {
jobs := make(chan string)
results := make(chan string)
for w := 1; w <= 16; w++ {
go worker(w, jobs, results)
}
urls, err := readLines("flux.txt")
if err != nil {
log.Fatalf("readLines: %s", err)
}
for _, url := range urls {
jobs <- url
}
close(jobs)
// it seems program runs over...
for msg := range results {
fmt.Println(msg)
}
}
flux.txt is a flat text file like:
http://blog.case.edu/news/feed.atom
...
The problem is that, in the example you are referring to, the worker pool reads from results 9 times:
for a := 1; a <= 9; a++ {
<-results
}
Your program, on the other hand, does a range loop over results, which has different semantics in Go: the range loop does not stop until the channel is closed.
for msg := range results {
fmt.Println(msg)
}
To fix your problem you need to close the results channel. However, if you just call close(results) before the for loop, you will most probably
get a panic, because the workers might still be writing to results.
So you need to wait until all the workers are done before closing results. You can do this either with a sync.WaitGroup:
const (
workers = 16
)
func main() {
jobs := make(chan string, 100)
results := make(chan string, 100)
var wg sync.WaitGroup
for w := 0; w < workers; w++ {
wg.Add(1) // register the worker before starting it, so wg.Wait cannot return too early
go func(id int) {
defer wg.Done()
worker(id, jobs, results)
}(w) // pass the loop variable as an argument so each goroutine gets its own copy
}
urls, err := readLines("flux.txt")
if err != nil {
log.Fatalf("readLines: %s", err)
}
for _, url := range urls {
jobs <- url
}
close(jobs)
wg.Wait()
close(results)
// it seems program runs over...
for msg := range results {
fmt.Println(msg)
}
}
Or with done channels (one per worker):
package main
import (
"bufio"
"fmt"
"github.com/SlyMarbo/rss"
"log"
"os"
)
func readLines(path string) ([]string, error) {
file, err := os.Open(path)
if err != nil {
return nil, err
}
defer file.Close()
var lines []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
return lines, scanner.Err()
}
func worker(id int, jobs <-chan string, results chan<- string, done chan struct{}) {
for url := range jobs {
fmt.Println("worker", id, "processing job", url)
feed, err := rss.Fetch(url)
if err != nil {
fmt.Println("Error on: ", url)
continue
}
borne := 0
for _, value := range feed.Items {
if borne < 5 {
results <- value.Link
borne = borne + 1
} else {
continue
}
}
}
close(done)
}
const (
workers = 16
)
func main() {
jobs := make(chan string, 100)
results := make(chan string, 100)
dones := make([]chan struct{}, workers)
for w := 0; w < workers; w++ {
dones[w] = make(chan struct{})
go worker(w, jobs, results, dones[w])
}
urls, err := readLines("flux.txt")
if err != nil {
log.Fatalf("readLines: %s", err)
}
for _, url := range urls {
jobs <- url
}
close(jobs)
for _, done := range dones {
<-done
}
close(results)
// it seems program runs over...
for msg := range results {
fmt.Println(msg)
}
}
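One caveat for both variants: they rely on the 100-element buffer of results being large enough. If the workers produce more results than fit in the buffer while main is still blocked in wg.Wait() (or on the done channels), the program deadlocks again. A minimal sketch of a more robust ending, assuming the sync.WaitGroup variant above, is to close results from a helper goroutine and let main range over it straight away:
go func() {
    wg.Wait()      // all workers have returned
    close(results) // now the range loop below can terminate
}()
for msg := range results {
    fmt.Println(msg)
}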

Any better way to keep track of goroutine responses?

I'm trying to get my head around goroutines. I've created a simple program that performs the same search in parallel across multiple search engines. At the moment, to keep track of the responses, I count how many I've received. It seems a bit amateurish, though.
Is there a better way of knowing when I've received a response from all of the goroutines in the following code?
package main
import (
"fmt"
"net/http"
"log"
)
type Query struct {
url string
status string
}
func search (url string, out chan Query) {
fmt.Printf("Fetching URL %s\n", url)
resp, err := http.Get(url)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
out <- Query{url, resp.Status}
}
func main() {
searchTerm := "carrot"
fmt.Println("Hello world! Searching for ", searchTerm)
searchEngines := []string{
"http://www.bing.co.uk/?q=",
"http://www.google.co.uk/?q=",
"http://www.yahoo.co.uk/?q="}
out := make(chan Query)
for i := 0; i < len(searchEngines); i++ {
go search(searchEngines[i] + searchTerm, out)
}
progress := 0
for {
// is there a better way of doing this step?
if progress >= len(searchEngines) {
break
}
fmt.Println("Polling...")
query := <-out
fmt.Printf("Status from %s was %s\n", query.url, query.status)
progress++
}
}
Please use sync.WaitGroup; there is an example in the package documentation:
searchEngines := []string{
"http://www.bing.co.uk/?q=",
"http://www.google.co.uk/?q=",
"http://www.yahoo.co.uk/?q="}
var wg sync.WaitGroup
out := make(chan Query)
for i := 0; i < len(searchEngines); i++ {
wg.Add(1)
go func (url string) {
defer wg.Done()
fmt.Printf("Fetching URL %s\n", url)
resp, err := http.Get(url)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
out <- Query{url, resp.Status}
}(searchEngines[i] + searchTerm)
}
go func() {
wg.Wait()
close(out)
}()
for query := range out {
fmt.Printf("Status from %s was %s\n", query.url, query.status)
}
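Waiting in a separate goroutine and closing out once wg.Wait() returns lets main simply range over out: the loop ends exactly when the last response has arrived, so there is no need to count responses by hand.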
