goroutine didn't take effect in Crawl example of 'A Tour of Go'

Following the hints in the Crawl example of 'A Tour of Go', I modified the Crawl function, and I just wonder why 'go Crawl' failed to spawn more goroutines: only one url was printed out.
Is there anything wrong with my modification?
My modification is listed below:
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
    // TODO: Fetch URLs in parallel.
    // TODO: Don't fetch the same URL twice.
    // This implementation doesn't do either:
    if depth <= 0 {
        fmt.Printf("depth <= 0 return")
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    crawled.mux.Lock()
    crawled.c[url]++
    crawled.mux.Unlock()
    for _, u := range urls {
        //crawled.mux.Lock()
        if cnt, ok := crawled.c[u]; ok {
            cnt++
        } else {
            fmt.Println("go ...", u)
            go Crawl(u, depth-1, fetcher)
        }
        //crawled.mux.Unlock()
        //Crawl(u, depth-1, fetcher)
    }
    return
}

type crawledUrl struct {
    c   map[string]int
    mux sync.Mutex
}

var crawled = crawledUrl{c: make(map[string]int)}

In your program, there is no synchronization between your goroutines, so the behavior of this code is undefined: the main goroutine may simply end first.
Remember that the main goroutine never blocks to wait for other goroutines to terminate unless you explicitly synchronize them, for example with channels or the utilities in the sync package.
Here is a version that does:
type fetchState struct {
    mu      sync.Mutex
    fetched map[string]bool
}

func (f *fetchState) CheckAndMark(url string) bool {
    f.mu.Lock()
    defer f.mu.Unlock()
    if f.fetched[url] {
        return true
    }
    f.fetched[url] = true
    return false
}

func mkFetchState() *fetchState {
    f := &fetchState{}
    f.fetched = make(map[string]bool)
    return f
}

func CrawlConcurrentMutex(url string, fetcher Fetcher, f *fetchState) {
    if f.CheckAndMark(url) {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    var done sync.WaitGroup
    for _, u := range urls {
        done.Add(1)
        go func(u string) {
            defer done.Done()
            CrawlConcurrentMutex(u, fetcher, f)
        }(u) // Without the u argument there is a race
    }
    done.Wait()
    return
}
Pay attention to the usage of sync.WaitGroup; refer to its documentation and you can understand the whole story.

Related

How to prioritize goroutines

I want to call two endpoints at the same time (A and B). But if I get a 200 response from both, I need to use the response from A; otherwise, use B's response.
If B returns first I need to wait for A; in other words, I must use A whenever A returns 200.
Can you help me with the pattern?
Thank you
Wait for a result from A. If the result is not good, then wait for a result from B. Use a buffered channel for the B result so that the sender does not block when A is good.
In the following snippet, fnA() and fnB() are functions that issue requests to the endpoints, consume the response and clean up. I assume that the result is a []byte, but it could be the result of decoding JSON or something else. Here's an example for fnA:
func fnA() ([]byte, error) {
    r, err := http.Get("http://example.com/a")
    if err != nil {
        return nil, err
    }
    defer r.Body.Close() // <-- Important: close the response body!
    if r.StatusCode != 200 {
        return nil, errors.New("bad response")
    }
    return ioutil.ReadAll(r.Body)
}
Define a type to hold the result and error.
type response struct {
    result []byte
    err    error
}
With those preliminaries done, here's how to prioritize A over B.
a := make(chan response)
go func() {
    result, err := fnA()
    a <- response{result, err}
}()

b := make(chan response, 1) // Size > 0 is important!
go func() {
    result, err := fnB()
    b <- response{result, err}
}()

resp := <-a
if resp.err != nil {
    resp = <-b
    if resp.err != nil {
        // handle error. A and B both failed.
    }
}
result := resp.result
If the application does not execute code concurrently with A and B, then there's no need to use a goroutine for A:
b := make(chan response, 1) // Size > 0 is important!
go func() {
    result, err := fnB()
    b <- response{result, err}
}()

result, err := fnA()
if err != nil {
    resp := <-b
    if resp.err != nil {
        // handle error. A and B both failed.
    }
    result = resp.result
}
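The buffer size of 1 is what prevents a goroutine leak here: with an unbuffered channel, the fnB goroutine would block forever on its send whenever A succeeds and b is never read. A tiny standalone demonstration (no real endpoints involved):

```go
package main

import "fmt"

func main() {
    b := make(chan int, 1) // buffered: the send below cannot block
    go func() {
        b <- 42 // completes even if nobody ever receives
    }()
    // Pretend A succeeded, so we never read from b; the goroutine
    // above still finishes instead of leaking.
    fmt.Println("A succeeded; ignoring B")
}
```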
I suggest using something like the following. It's a bulkier solution, but it allows you to query more than two endpoints if needed.
func endpointPriorityTest() {
    const (
        sourceA = "a"
        sourceB = "b"
        sourceC = "c"
    )
    type endpointResponse struct {
        source   string
        response *http.Response
        error
    }
    epResponseChan := make(chan *endpointResponse)
    endpointsMap := map[string]string{
        sourceA: "https://jsonplaceholder.typicode.com/posts/1",
        sourceB: "https://jsonplaceholder.typicode.com/posts/10",
        sourceC: "https://jsonplaceholder.typicode.com/posts/100",
    }
    for source, endpointURL := range endpointsMap {
        source := source
        endpointURL := endpointURL
        go func(respChan chan<- *endpointResponse) {
            // You can add a delay so that the response from A takes longer
            // than from B and look at the result map:
            // if source == sourceA {
            //     time.Sleep(time.Second)
            // }
            resp, err := http.Get(endpointURL)
            respChan <- &endpointResponse{
                source:   source,
                response: resp,
                error:    err,
            }
        }(epResponseChan)
    }
    respCache := make(map[string]*http.Response)
    // Reading endpointURL responses from chan; count them so we stop
    // after the last one instead of blocking forever when A never succeeds
    remaining := len(endpointsMap)
    for epResp := range epResponseChan {
        remaining--
        // Skips failed requests
        if epResp.error != nil {
            if remaining == 0 {
                break
            }
            continue
        }
        // Save successful response to cache map
        respCache[epResp.source] = epResp.response
        // Interrupt reading the channel if we've got a response from source A
        if epResp.source == sourceA || remaining == 0 {
            break
        }
    }
    fmt.Println("result map: ", respCache)
    // Now we can use data from cache map
    // resp, ok := respCache[sourceA]
    // if ok {
    //     ...
    // }
}
@Zombo's answer has the correct logic flow. Piggybacking off it, I would suggest one addition: leveraging the context package.
Basically, any potentially blocking tasks should use context.Context to allow the call-chain to perform more efficient clean-up in the event of early cancelation.
context.Context also can be leveraged, in your case, to abort the B call early if the A call succeeds:
func failoverResult(ctx context.Context) *http.Response {
    // wrap the (parent) context
    ctx, cancel := context.WithCancel(ctx)
    // if we return early i.e. if `fnA()` completes first
    // this will "cancel" `fnB()`'s request.
    defer cancel()
    b := make(chan *http.Response, 1)
    go func() {
        b <- fnB(ctx)
    }()
    resp := fnA(ctx)
    if resp.StatusCode != 200 {
        resp = <-b
    }
    return resp
}
fnA (and fnB) would look something like this:
func fnA(ctx context.Context) (resp *http.Response) {
    req, _ := http.NewRequestWithContext(ctx, "GET", aUrl, nil)
    resp, _ = http.DefaultClient.Do(req) // TODO: check errors
    return
}
Normally in Go, channels are used for communicating between goroutines.
You can orchestrate your scenario with the following sample code.
Basically you pass a channel into callB, which will hold the response. You don't need to run callA in a goroutine since you always need the result from that endpoint/service.
package main

import (
    "fmt"
    "time"
)

func main() {
    resB := make(chan int)
    go callB(resB)
    res := callA()
    if res == 200 {
        fmt.Print("No Need for B")
    } else {
        res = <-resB
        fmt.Printf("Response from B : %d", res)
    }
}

func callA() int {
    time.Sleep(1000)
    return 200
}

func callB(res chan int) {
    time.Sleep(500)
    res <- 200
}
Update: As suggested in a comment, the code above leaks the callB goroutine. Using a buffered channel (and correct sleep durations) fixes that:
package main

import (
    "fmt"
    "time"
)

func main() {
    resB := make(chan int, 1)
    go callB(resB)
    res := callA()
    if res == 200 {
        fmt.Print("No Need for B")
    } else {
        res = <-resB
        fmt.Printf("Response from B : %d", res)
    }
}

func callA() int {
    time.Sleep(1000 * time.Millisecond)
    return 200
}

func callB(res chan int) {
    time.Sleep(500 * time.Millisecond)
    res <- 200
}

Concurrency issues with crawler

I'm trying to build a concurrent crawler based on the Tour and some other SO answers regarding that. What I currently have is below, but I think I have two subtle issues here.
Sometimes I get 16 urls in the response and sometimes 17 (see the debug print in main). I know this because when I change WriteToSlice to Read, 'Read: end, counter = ' is sometimes never reached in Read, and it's always when I get 16 urls.
I have trouble with the err channel: I get no messages in it, even when I run my main Crawl method with an address like www.golang.org, so without a valid schema an error should be sent via the err channel.
Concurrency is a really difficult topic; help and advice will be appreciated.
package main

import (
    "fmt"
    "net/http"
    "sync"

    "golang.org/x/net/html"
)

type urlCache struct {
    urls map[string]struct{}
    sync.Mutex
}

func (v *urlCache) Set(url string) bool {
    v.Lock()
    defer v.Unlock()
    _, exist := v.urls[url]
    v.urls[url] = struct{}{}
    return !exist
}

func newURLCache() *urlCache {
    return &urlCache{
        urls: make(map[string]struct{}),
    }
}

type results struct {
    data chan string
    err  chan error
}

func newResults() *results {
    return &results{
        data: make(chan string, 1),
        err:  make(chan error, 1),
    }
}

func (r *results) close() {
    close(r.data)
    close(r.err)
}

func (r *results) WriteToSlice(s *[]string) {
    for {
        select {
        case data := <-r.data:
            *s = append(*s, data)
        case err := <-r.err:
            fmt.Println("e ", err)
        }
    }
}

func (r *results) Read() {
    fmt.Println("Read: start")
    counter := 0
    for c := range r.data {
        fmt.Println(c)
        counter++
    }
    fmt.Println("Read: end, counter = ", counter)
}

func crawl(url string, depth int, wg *sync.WaitGroup, cache *urlCache, res *results) {
    defer wg.Done()
    if depth == 0 || !cache.Set(url) {
        return
    }
    response, err := http.Get(url)
    if err != nil {
        res.err <- err
        return
    }
    defer response.Body.Close()
    node, err := html.Parse(response.Body)
    if err != nil {
        res.err <- err
        return
    }
    urls := grablUrls(response, node)
    res.data <- url
    for _, url := range urls {
        wg.Add(1)
        go crawl(url, depth-1, wg, cache, res)
    }
}

func grablUrls(resp *http.Response, node *html.Node) []string {
    var f func(*html.Node) []string
    var results []string
    f = func(n *html.Node) []string {
        if n.Type == html.ElementNode && n.Data == "a" {
            for _, a := range n.Attr {
                if a.Key != "href" {
                    continue
                }
                link, err := resp.Request.URL.Parse(a.Val)
                if err != nil {
                    continue
                }
                results = append(results, link.String())
            }
        }
        for c := n.FirstChild; c != nil; c = c.NextSibling {
            f(c)
        }
        return results
    }
    res := f(node)
    return res
}

// Crawl ...
func Crawl(url string, depth int) []string {
    wg := &sync.WaitGroup{}
    output := &[]string{}
    visited := newURLCache()
    results := newResults()
    defer results.close()
    wg.Add(1)
    go crawl(url, depth, wg, visited, results)
    go results.WriteToSlice(output)
    // go results.Read()
    wg.Wait()
    return *output
}

func main() {
    r := Crawl("https://www.golang.org", 2)
    // r := Crawl("www.golang.org", 2) // no schema, error should be generated and sent via err
    fmt.Println(len(r))
}
Both your questions 1 and 2 are a result of the same bug.
In Crawl() you are not waiting for this goroutine to finish: go results.WriteToSlice(output). When the last crawl() call finishes, the wait group is released, and the output is returned and printed before the WriteToSlice function is done with the data and err channels. So what has happened is this:
crawl() finishes, placing data in results.data and results.err.
The wait group's Wait() unblocks, causing main() to print the length of the result []string.
WriteToSlice adds the last data (or err) item to the slice.
You need to return from Crawl() not only when the data is done being written to the channel, but also when the channel is done being read in its entirety (including the buffer). A good way to do this is to close channels when you are sure you are done with them. By organizing your code this way, you can block on the goroutine that is draining the channels, and instead of using the wait group to release to main, you wait until the channels are 100% done.
You can see this gobyexample https://gobyexample.com/closing-channels. Remember that when you close a channel, the channel can still be used until the last item is taken. So you can close a buffered channel, and the reader will still get all the items that were queued in the channel.
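A quick standalone illustration of that property: a closed buffered channel can still be drained, and a receive on an empty closed channel reports ok == false.

```go
package main

import "fmt"

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3
    close(ch) // no more sends allowed, but the buffered values remain
    // range keeps receiving until the buffer is empty, then exits
    for v := range ch {
        fmt.Println(v)
    }
    // a receive on an empty closed channel returns the zero value and ok == false
    v, ok := <-ch
    fmt.Println(v, ok) // 0 false
}
```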
There is some code structure that can change to make this cleaner, but here is a quick way to fix your program. Change Crawl to block on WriteToSlice. Close the data channel when the crawl function finishes, and wait for WriteToSlice to finish.
// Crawl ...
func Crawl(url string, depth int) []string {
    wg := &sync.WaitGroup{}
    output := &[]string{}
    visited := newURLCache()
    results := newResults()
    go func() {
        wg.Add(1)
        go crawl(url, depth, wg, visited, results)
        wg.Wait()
        // All data is written, this makes `WriteToSlice()` unblock
        close(results.data)
    }()
    // This will block until results.data is closed
    results.WriteToSlice(output)
    close(results.err)
    return *output
}
Then on write to slice, you have to check for the closed channel to exit the for loop:
func (r *results) WriteToSlice(s *[]string) {
    for {
        select {
        case data, open := <-r.data:
            if !open {
                return // All data done
            }
            *s = append(*s, data)
        case err := <-r.err:
            fmt.Println("e ", err)
        }
    }
}
Here is the full code: https://play.golang.org/p/GBpGk-lzrhd (it won't work in the playground)

Goroutine didn't run as expected

I'm still learning Go and was doing the exercise of a web crawler as linked here. The main part I implemented is as follows. (Other parts remain the same and can be found in the link.)
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
    // TODO: Fetch URLs in parallel.
    // TODO: Don't fetch the same URL twice.
    // This implementation doesn't do either:
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    cache.Set(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    for _, u := range urls {
        if cache.Get(u) == false {
            fmt.Println("Next:", u)
            Crawl(u, depth-1, fetcher) // I want to parallelize this
        }
    }
    return
}

func main() {
    Crawl("https://golang.org/", 4, fetcher)
}

type SafeCache struct {
    v   map[string]bool
    mux sync.Mutex
}

func (c *SafeCache) Set(key string) {
    c.mux.Lock()
    c.v[key] = true
    c.mux.Unlock()
}

func (c *SafeCache) Get(key string) bool {
    return c.v[key]
}

var cache SafeCache = SafeCache{v: make(map[string]bool)}
When I ran the code above, the result was expected:
found: https://golang.org/ "The Go Programming Language"
Next: https://golang.org/pkg/
found: https://golang.org/pkg/ "Packages"
Next: https://golang.org/cmd/
not found: https://golang.org/cmd/
Next: https://golang.org/pkg/fmt/
found: https://golang.org/pkg/fmt/ "Package fmt"
Next: https://golang.org/pkg/os/
found: https://golang.org/pkg/os/ "Package os"
However, when I tried to parallelize the crawler (on the line with a comment in the program above) by changing Crawl(u, depth-1, fetcher) to go Crawl(u, depth-1, fetcher), the results were not as I expected:
found: https://golang.org/ "The Go Programming Language"
Next: https://golang.org/pkg/
Next: https://golang.org/cmd/
I thought directly adding a go keyword would be as straightforward as it seems, but I'm not sure what went wrong, and I'm confused about how best to approach this problem. Any advice would be appreciated. Thank you in advance!
Your program is most likely exiting before the crawlers finish their work. One approach is for Crawl to have a WaitGroup with which it waits for all of its sub-crawlers to finish. For example:
import "sync"

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, wg *sync.WaitGroup) {
    defer func() {
        // If the crawler was given a wait group, signal that it's finished
        if wg != nil {
            wg.Done()
        }
    }()
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    cache.Set(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    var crawlers sync.WaitGroup
    for _, u := range urls {
        if cache.Get(u) == false {
            fmt.Println("Next:", u)
            crawlers.Add(1)
            go Crawl(u, depth-1, fetcher, &crawlers)
        }
    }
    crawlers.Wait() // Waits for its sub-crawlers to finish
    return
}

func main() {
    // The root does not need a WaitGroup
    Crawl("http://example.com/index.html", 4, fetcher, nil)
}

Exercise: Web Crawler - print not working

I'm a golang newbie currently working on Exercise: Web Crawler.
I simply put the keyword 'go' before every place where func Crawl is invoked, hoping it would be parallelized, but fmt.Printf doesn't work and prints nothing. Nothing else was changed from the original code. Would someone like to give me a hand?
func Crawl(url string, depth int, fetcher Fetcher) {
    // TODO: Fetch URLs in parallel.
    // TODO: Don't fetch the same URL twice.
    // This implementation doesn't do either:
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    for _, u := range urls {
        go Crawl(u, depth-1, fetcher)
    }
    return
}

func main() {
    go Crawl("https://golang.org/", 4, fetcher)
}
According to the spec
Program execution begins by initializing the main package and then invoking the function main. When that function invocation returns, the program exits. It does not wait for other (non-main) goroutines to complete.
Therefore you have to explicitly wait for the other goroutines to end in the main() function.
One way is to simply add time.Sleep() at the end of main() until you think the other goroutines have ended (e.g. maybe 1 second in this case).
A cleaner way is to use sync.WaitGroup as follows:
func Crawl(wg *sync.WaitGroup, url string, depth int, fetcher Fetcher) {
    defer wg.Done()
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    for _, u := range urls {
        wg.Add(1)
        go Crawl(wg, u, depth-1, fetcher)
    }
    return
}

func main() {
    wg := &sync.WaitGroup{}
    wg.Add(1)
    // first call does not need to be a goroutine since its subroutines are goroutines.
    Crawl(wg, "https://golang.org/", 4, fetcher)
    //time.Sleep(1000 * time.Millisecond)
    wg.Wait()
}
This code stores a counter in the WaitGroup, increments it using wg.Add(), decrements it using wg.Done(), and waits until it reaches zero using wg.Wait().
Confirm it in go playground: https://play.golang.org/p/WqQBqe6iFLp

Tour of Go exercise #10: Crawler

I'm going through the Go Tour and I feel like I have a pretty good understanding of the language except for concurrency.
Slide 10 is an exercise that asks the reader to parallelize a web crawler (and to make it not crawl repeats, but I haven't gotten there yet).
Here is what I have so far:
func Crawl(url string, depth int, fetcher Fetcher, ch chan string) {
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        ch <- fmt.Sprintln(err)
        return
    }
    ch <- fmt.Sprintf("found: %s %q\n", url, body)
    for _, u := range urls {
        go Crawl(u, depth-1, fetcher, ch)
    }
}

func main() {
    ch := make(chan string, 100)
    go Crawl("http://golang.org/", 4, fetcher, ch)
    for i := range ch {
        fmt.Println(i)
    }
}
My question is: where do I put the close(ch) call?
If I put a defer close(ch) somewhere in the Crawl method, then the program ends up writing to a closed channel from one of the spawned goroutines, because the call to Crawl returns before the spawned goroutines do.
If I omit the call to close(ch), as demonstrated, the program deadlocks in the main function while ranging over the channel, because the channel is never closed once all goroutines have returned.
A look at the Parallelization section of Effective Go leads to ideas for the solution. Essentially you have to close the channel on each return route of the function. Actually this is a nice use case for the defer statement:
func Crawl(url string, depth int, fetcher Fetcher, ret chan string) {
    defer close(ret)
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        ret <- err.Error()
        return
    }
    ret <- fmt.Sprintf("found: %s %q", url, body)
    result := make([]chan string, len(urls))
    for i, u := range urls {
        result[i] = make(chan string)
        go Crawl(u, depth-1, fetcher, result[i])
    }
    for i := range result {
        for s := range result[i] {
            ret <- s
        }
    }
    return
}

func main() {
    result := make(chan string)
    go Crawl("http://golang.org/", 4, fetcher, result)
    for s := range result {
        fmt.Println(s)
    }
}
The essential difference to your code is that every instance of Crawl gets its own return channel and the caller function collects the results in its return channel.
I went in a completely different direction with this one. I might have been misled by the tip about using a map.
// SafeUrlMap is safe to use concurrently.
type SafeUrlMap struct {
    v   map[string]string
    mux sync.Mutex
}

func (c *SafeUrlMap) Set(key string, body string) {
    c.mux.Lock()
    // Lock so only one goroutine at a time can access the map c.v.
    c.v[key] = body
    c.mux.Unlock()
}

// Value returns mapped value for the given key.
func (c *SafeUrlMap) Value(key string) (string, bool) {
    c.mux.Lock()
    // Lock so only one goroutine at a time can access the map c.v.
    defer c.mux.Unlock()
    val, ok := c.v[key]
    return val, ok
}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, urlMap SafeUrlMap) {
    defer wg.Done()
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    urlMap.Set(url, body) // body exists only after the fetch
    for _, u := range urls {
        if _, ok := urlMap.Value(u); !ok {
            wg.Add(1)
            go Crawl(u, depth-1, fetcher, urlMap)
        }
    }
    return
}

var wg sync.WaitGroup

func main() {
    urlMap := SafeUrlMap{v: make(map[string]string)}
    wg.Add(1)
    go Crawl("http://golang.org/", 4, fetcher, urlMap)
    wg.Wait()
    for url := range urlMap.v {
        body, _ := urlMap.Value(url)
        fmt.Printf("found: %s %q\n", url, body)
    }
}
O(1) lookup of a url in a map instead of O(n) lookup in a slice of all urls visited should help minimize time spent inside the critical section, which is a trivial amount of time for this example but would become relevant at scale.
A WaitGroup is used to prevent the top-level Crawl() function from returning until all child goroutines are complete.
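To make the O(1) vs O(n) point concrete, here is a small standalone comparison (the helper sliceContains is hypothetical, just for illustration):

```go
package main

import "fmt"

// sliceContains scans every element: O(n) per lookup.
func sliceContains(visited []string, url string) bool {
    for _, v := range visited {
        if v == url {
            return true
        }
    }
    return false
}

func main() {
    visited := []string{"a", "b", "c"}
    fmt.Println(sliceContains(visited, "b")) // true

    // map membership test is O(1) on average
    seen := map[string]bool{"a": true, "b": true, "c": true}
    fmt.Println(seen["b"]) // true
    _, ok := seen["z"]
    fmt.Println(ok) // false
}
```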
func Crawl(url string, depth int, fetcher Fetcher) {
    var str_map = make(map[string]bool)
    var mux sync.Mutex
    var wg sync.WaitGroup

    var crawler func(string, int)
    crawler = func(url string, depth int) {
        defer wg.Done()
        if depth <= 0 {
            return
        }
        mux.Lock()
        if _, ok := str_map[url]; ok {
            mux.Unlock()
            return
        } else {
            str_map[url] = true
            mux.Unlock()
        }
        body, urls, err := fetcher.Fetch(url)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("found: %s %q %q\n", url, body, urls)
        for _, u := range urls {
            wg.Add(1)
            go crawler(u, depth-1)
        }
    }
    wg.Add(1)
    crawler(url, depth)
    wg.Wait()
}

func main() {
    Crawl("http://golang.org/", 4, fetcher)
}
Similar idea to the accepted answer, but with no duplicate URLs fetched, and printing directly to console. defer() is not used either. We use channels to signal when goroutines complete. The SafeMap idea is lifted off the SafeCounter given previously in the tour.
For the child goroutines, we create an array of channels, and wait until every child returns, by waiting on the channel.
package main

import (
    "fmt"
    "sync"
)

// SafeMap is safe to use concurrently.
type SafeMap struct {
    v   map[string]bool
    mux sync.Mutex
}

// SetVal sets the value for the given key.
func (m *SafeMap) SetVal(key string, val bool) {
    m.mux.Lock()
    // Lock so only one goroutine at a time can access the map c.v.
    m.v[key] = val
    m.mux.Unlock()
}

// GetVal returns the current value for the given key.
func (m *SafeMap) GetVal(key string) bool {
    m.mux.Lock()
    // Lock so only one goroutine at a time can access the map c.v.
    defer m.mux.Unlock()
    return m.v[key]
}

type Fetcher interface {
    // Fetch returns the body of URL and
    // a slice of URLs found on that page.
    Fetch(url string) (body string, urls []string, err error)
}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, status chan bool, urlMap SafeMap) {
    // Check if we fetched this url previously.
    if ok := urlMap.GetVal(url); ok {
        //fmt.Println("Already fetched url!")
        status <- true
        return
    }
    // Mark this url as already fetched.
    urlMap.SetVal(url, true)
    if depth <= 0 {
        status <- false
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        status <- false
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    statuses := make([]chan bool, len(urls))
    for index, u := range urls {
        statuses[index] = make(chan bool)
        go Crawl(u, depth-1, fetcher, statuses[index], urlMap)
    }
    // Wait for child goroutines.
    for _, childstatus := range statuses {
        <-childstatus
    }
    // And now this goroutine can finish.
    status <- true
    return
}

func main() {
    urlMap := SafeMap{v: make(map[string]bool)}
    status := make(chan bool)
    go Crawl("https://golang.org/", 4, fetcher, status, urlMap)
    <-status
}
I think using a map (the same way we could use a set in other languages) and a mutex is the easiest approach:
func Crawl(url string, depth int, fetcher Fetcher) {
    mux.Lock()
    defer mux.Unlock()
    if depth <= 0 || IsVisited(url) {
        return
    }
    visit[url] = true
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    for _, u := range urls {
        go Crawl(u, depth-1, fetcher)
    }
    return
}

func IsVisited(s string) bool {
    _, ok := visit[s]
    return ok
}

var mux sync.Mutex
var visit = make(map[string]bool)

func main() {
    Crawl("https://golang.org/", 4, fetcher)
    time.Sleep(time.Second)
}
Here is my solution. I use empty structs as values in the safe cache because they are not assigned any memory. I based it off of whossname's solution.
package main

import (
    "fmt"
    "sync"
)

type Fetcher interface {
    // Fetch returns the body of URL and
    // a slice of URLs found on that page.
    Fetch(url string) (body string, urls []string, err error)
}

type safeCache struct {
    m map[string]struct{}
    c sync.Mutex
}

func (s *safeCache) Get(key string) bool {
    s.c.Lock()
    defer s.c.Unlock()
    if _, ok := s.m[key]; !ok {
        return false
    }
    return true
}

func (s *safeCache) Set(key string) {
    s.c.Lock()
    s.m[key] = struct{}{}
    s.c.Unlock()
    return
}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, cach safeCache) {
    defer wg.Done()
    // TODO: Fetch URLs in parallel.
    // TODO: Don't fetch the same URL twice.
    // This implementation doesn't do either:
    cach.Set(url)
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    for _, u := range urls {
        if found := cach.Get(u); !found {
            wg.Add(1)
            go Crawl(u, depth-1, fetcher, cach)
        }
    }
    return
}

var wg sync.WaitGroup

func main() {
    urlSafe := safeCache{m: make(map[string]struct{})}
    wg.Add(1)
    go Crawl("https://golang.org/", 4, fetcher, urlSafe)
    wg.Wait()
}

// fakeFetcher is a Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult

type fakeResult struct {
    body string
    urls []string
}

func (f fakeFetcher) Fetch(url string) (string, []string, error) {
    if res, ok := f[url]; ok {
        return res.body, res.urls, nil
    }
    return "", nil, fmt.Errorf("not found: %s", url)
}

// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
    "https://golang.org/": &fakeResult{
        "The Go Programming Language",
        []string{
            "https://golang.org/pkg/",
            "https://golang.org/cmd/",
        },
    },
    "https://golang.org/pkg/": &fakeResult{
        "Packages",
        []string{
            "https://golang.org/",
            "https://golang.org/cmd/",
            "https://golang.org/pkg/fmt/",
            "https://golang.org/pkg/os/",
        },
    },
    "https://golang.org/pkg/fmt/": &fakeResult{
        "Package fmt",
        []string{
            "https://golang.org/",
            "https://golang.org/pkg/",
        },
    },
    "https://golang.org/pkg/os/": &fakeResult{
        "Package os",
        []string{
            "https://golang.org/",
            "https://golang.org/pkg/",
        },
    },
}
Below is my solution. Except for the global map, I only had to change the contents of Crawl. Like other solutions, I used sync.Map and sync.WaitGroup. I've boxed off the important parts.
var m sync.Map

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
    // This implementation doesn't do either:
    if depth <= 0 {
        return
    }
    // Don't fetch the same URL twice.
    ///////////////////////////////////
    _, ok := m.LoadOrStore(url, url) //
    if ok {                          //
        return                       //
    }                                //
    ///////////////////////////////////
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    // Fetch URLs in parallel.
    /////////////////////////////////////
    var wg sync.WaitGroup              //
    defer wg.Wait()                    //
    for _, u := range urls {           //
        wg.Add(1)                      //
        go func(u string) {            //
            defer wg.Done()            //
            Crawl(u, depth-1, fetcher) //
        }(u)                           //
    }                                  //
    /////////////////////////////////////
    return
}
Here's my solution. I have a "master" routine that listens to a channel of urls and starts a new crawling routine (which puts crawled urls into the channel) whenever it finds new urls to crawl.
Instead of explicitly closing the channel, I have a counter of unfinished crawling goroutines; when the counter reaches 0, the program exits because there is nothing left to wait for.
func doCrawl(url string, fetcher Fetcher, results chan []string) {
    body, urls, err := fetcher.Fetch(url)
    results <- urls
    if err != nil {
        fmt.Println(err)
    } else {
        fmt.Printf("found: %s %q\n", url, body)
    }
}

func Crawl(url string, depth int, fetcher Fetcher) {
    results := make(chan []string)
    crawled := make(map[string]bool)
    go doCrawl(url, fetcher, results)
    // counter for unfinished crawling goroutines
    toWait := 1
    for urls := range results {
        toWait--
        for _, u := range urls {
            if !crawled[u] {
                crawled[u] = true
                go doCrawl(u, fetcher, results)
                toWait++
            }
        }
        if toWait == 0 {
            break
        }
    }
}
I have implemented it with a simple channel where all the goroutines send their messages. To ensure that it is closed when there are no more goroutines, I use a safe counter that closes the channel when the counter reaches 0.
type Msg struct {
    url  string
    body string
}

type SafeCounter struct {
    v   int
    mux sync.Mutex
}

func (c *SafeCounter) inc() {
    c.mux.Lock()
    defer c.mux.Unlock()
    c.v++
}

func (c *SafeCounter) dec(ch chan Msg) {
    c.mux.Lock()
    defer c.mux.Unlock()
    c.v--
    if c.v == 0 {
        close(ch)
    }
}

var goes SafeCounter = SafeCounter{v: 0}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, ch chan Msg) {
    defer goes.dec(ch)
    // TODO: Fetch URLs in parallel.
    // TODO: Don't fetch the same URL twice.
    // This implementation doesn't do either:
    if depth <= 0 {
        return
    }
    if !cache.existsAndRegister(url) {
        body, urls, err := fetcher.Fetch(url)
        if err != nil {
            fmt.Println(err)
            return
        }
        ch <- Msg{url, body}
        for _, u := range urls {
            goes.inc()
            go Crawl(u, depth-1, fetcher, ch)
        }
    }
    return
}

func main() {
    ch := make(chan Msg, 100)
    goes.inc()
    go Crawl("http://golang.org/", 4, fetcher, ch)
    for m := range ch {
        fmt.Printf("found: %s %q\n", m.url, m.body)
    }
}
Note that the safe counter must be incremented outside of the goroutine.
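To see why the ordering matters, here is a simplified standalone sketch of the same pattern (the pending type is my own stand-in for the SafeCounter): each inc() happens before the corresponding go statement, so the counter can never hit zero while work is still being handed off. If the child incremented itself, the parent's deferred dec() could drop the counter to zero and close the channel before the child ever ran.

```go
package main

import (
    "fmt"
    "sync"
)

// pending mimics the SafeCounter: it closes done when the count returns to zero.
type pending struct {
    mu   sync.Mutex
    n    int
    done chan struct{}
}

func (p *pending) inc() {
    p.mu.Lock()
    p.n++
    p.mu.Unlock()
}

func (p *pending) dec() {
    p.mu.Lock()
    p.n--
    if p.n == 0 {
        close(p.done)
    }
    p.mu.Unlock()
}

func main() {
    p := &pending{done: make(chan struct{})}
    p.inc() // increment BEFORE the go statement
    go func() {
        defer p.dec()
        p.inc() // the child spawns its own work the same way
        go func() {
            defer p.dec()
            fmt.Println("grandchild done")
        }()
    }()
    <-p.done // closes only after every dec() has run
    fmt.Println("all work finished")
}
```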
I passed the SafeCounter and WaitGroup into the Crawl function, using the SafeCounter to skip over urls that have already been visited and the WaitGroup to prevent an early exit from the current goroutine.
func Crawl(url string, depth int, fetcher Fetcher, c *SafeCounter, wg *sync.WaitGroup) {
    defer wg.Done()
    if depth <= 0 {
        return
    }
    c.mux.Lock()
    c.v[url]++
    c.mux.Unlock()
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    for _, u := range urls {
        c.mux.Lock()
        i := c.v[u]
        c.mux.Unlock()
        if i == 1 {
            continue
        }
        wg.Add(1)
        go Crawl(u, depth-1, fetcher, c, wg)
    }
    return
}

func main() {
    c := SafeCounter{v: make(map[string]int)}
    var wg sync.WaitGroup
    wg.Add(1)
    Crawl("https://golang.org/", 4, fetcher, &c, &wg)
    wg.Wait()
}
Here is my version (inspired by @fasmat's answer). This one prevents fetching the same URL twice by utilizing a custom cache with an RWMutex.
type Cache struct {
data map[string]fakeResult
mux sync.RWMutex
}
var cache = Cache{data: make(map[string]fakeResult)}
//cache adds new page to the global cache
func (c *Cache) cache(url string) fakeResult {
c.mux.Lock()
body, urls, err := fetcher.Fetch(url)
if err != nil {
body = err.Error()
}
data := fakeResult{body, urls}
c.data[url] = data
c.mux.Unlock()
return data
}
//Visit visits the page at the given url and caches it if needed
func (c *Cache) Visit(url string) (data fakeResult, alreadyCached bool) {
c.mux.RLock()
data, alreadyCached = c.data[url]
c.mux.RUnlock()
if !alreadyCached {
data = c.cache(url)
}
return data, alreadyCached
}
/*
Crawl crawls all pages reachable from url and within the given depth.
It fetches pages using the given fetcher and caches them in the global cache.
Newly discovered pages are continuously sent to the out channel.
*/
func Crawl(url string, depth int, fetcher Fetcher, out chan string) {
defer close(out)
if depth <= 0 {
return
}
data, alreadyCached := cache.Visit(url)
if alreadyCached {
return
}
//send newly discovered page to out channel
out <- fmt.Sprintf("found: %s %q", url, data.body)
//visit linked pages
res := make([]chan string, len(data.urls))
for i, link := range data.urls {
res[i] = make(chan string)
go Crawl(link, depth-1, fetcher, res[i])
}
//send newly discovered pages from links to out channel
for i := range res {
for s := range res[i] {
out <- s
}
}
}
func main() {
res := make(chan string)
go Crawl("https://golang.org/", 4, fetcher, res)
for page := range res {
fmt.Println(page)
}
}
Aside from not fetching URLs twice, this solution doesn't rely on knowing the total number of pages in advance (it works for any number of pages) and doesn't artificially limit or extend program execution time with timers.
I'm new to Go, so take this with a grain of salt, but this solution seems more idiomatic to me. It uses a single channel for all of the results, a single channel for all of the crawl requests (attempts to crawl a specific URL), and a wait group to track completion. The main Crawl call distributes the crawl requests to worker goroutines (handling deduplication) and tracks how many crawl requests are still pending.
package main
import (
"fmt"
"sync"
)
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
type FetchResult struct {
url string
body string
err error
}
type CrawlRequest struct {
url string
depth int
}
type Crawler struct {
depth int
fetcher Fetcher
results chan FetchResult
crawlRequests chan CrawlRequest
urlReservations map[string]bool
waitGroup *sync.WaitGroup
}
func (crawler Crawler) Crawl(url string, depth int) {
defer crawler.waitGroup.Done()
if depth <= 0 {
return
}
body, urls, err := crawler.fetcher.Fetch(url)
crawler.results <- FetchResult{url, body, err}
if len(urls) == 0 {
return
}
crawler.waitGroup.Add(len(urls))
for _, url := range urls {
crawler.crawlRequests <- CrawlRequest{url, depth - 1}
}
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) (results chan FetchResult) {
results = make(chan FetchResult)
urlReservations := make(map[string]bool)
crawler := Crawler{
crawlRequests: make(chan CrawlRequest),
depth: depth,
fetcher: fetcher,
results: results,
waitGroup: &sync.WaitGroup{},
}
crawler.waitGroup.Add(1)
// Listen for crawlRequests, pass them through to the caller if they aren't duplicates.
go func() {
for crawlRequest := range crawler.crawlRequests {
if _, isReserved := urlReservations[crawlRequest.url]; isReserved {
crawler.waitGroup.Done()
continue
}
urlReservations[crawlRequest.url] = true
go crawler.Crawl(crawlRequest.url, crawlRequest.depth)
}
}()
// Wait for the wait group to finish, and then close the channel
go func() {
crawler.waitGroup.Wait()
close(results)
}()
// Send the first crawl request to the channel
crawler.crawlRequests <- CrawlRequest{url, depth}
return
}
func main() {
results := Crawl("https://golang.org/", 4, fetcher)
for result := range results {
if result.err != nil {
fmt.Println(result.err)
continue
}
fmt.Printf("found: %s %q\n", result.url, result.body)
}
fmt.Printf("done!")
}
// fakeFetcher is Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult
type fakeResult struct {
body string
urls []string
}
func (f fakeFetcher) Fetch(url string) (string, []string, error) {
if res, ok := f[url]; ok {
return res.body, res.urls, nil
}
return "", nil, fmt.Errorf("not found: %s", url)
}
// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
"https://golang.org/": &fakeResult{
"The Go Programming Language",
[]string{
"https://golang.org/pkg/",
"https://golang.org/cmd/",
},
},
"https://golang.org/pkg/": &fakeResult{
"Packages",
[]string{
"https://golang.org/",
"https://golang.org/cmd/",
"https://golang.org/pkg/fmt/",
"https://golang.org/pkg/os/",
},
},
"https://golang.org/pkg/fmt/": &fakeResult{
"Package fmt",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
"https://golang.org/pkg/os/": &fakeResult{
"Package os",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
}
Here is my solution. I had a problem where the main function didn't wait for the goroutines to print their statuses and finish. I saw that the previous slide used a solution that waits one second before exiting, and I decided to use that approach. In practice, though, I believe some coordination mechanism is better.
import (
"fmt"
"sync"
"time"
)
type SafeMap struct {
mu sync.Mutex
v map[string]bool
}
// Sets the given key to true.
func (sm *SafeMap) Set(key string) {
sm.mu.Lock()
sm.v[key] = true
sm.mu.Unlock()
}
// Get returns the current value for the given key.
func (sm *SafeMap) Get(key string) bool {
sm.mu.Lock()
defer sm.mu.Unlock()
return sm.v[key]
}
var safeMap = SafeMap{v: make(map[string]bool)}
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
if depth <= 0 {
return
}
// if the value exists, don't fetch it twice
if safeMap.Get(url) {
return
}
// check if there is an error fetching
body, urls, err := fetcher.Fetch(url)
safeMap.Set(url)
if err != nil {
fmt.Println(err)
return
}
// list contents and crawl recursively
fmt.Printf("found: %s %q\n", url, body)
for _, u := range urls {
go Crawl(u, depth-1, fetcher)
}
}
func main() {
go Crawl("https://golang.org/", 4, fetcher)
time.Sleep(time.Second)
}
No need to change any signatures or introduce anything new in the global scope. We can use a sync.WaitGroup to wait for the recursive goroutines to finish. A map from strings to empty structs acts as a set, and is the most memory-efficient way to track the already-crawled URLs.
func Crawl(url string, depth int, fetcher Fetcher) {
visited := make(map[string]struct{})
var mu sync.Mutex
var wg sync.WaitGroup
var recurse func(string, int)
recurse = func(url string, depth int) {
defer wg.Done()
if depth <= 0 {
return
}
mu.Lock()
defer mu.Unlock()
if _, ok := visited[url]; ok {
return
}
visited[url] = struct{}{}
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
for _, u := range urls {
wg.Add(1)
go recurse(u, depth-1)
}
}
wg.Add(1)
go recurse(url, depth)
wg.Wait()
}
func main() {
Crawl("https://golang.org/", 4, fetcher)
}
Full demo on the Go Playground
I use a slice to avoid crawling the same URL twice. The recursive version without concurrency works, but I'm not sure about this concurrent version.
func Crawl(url string, depth int, fetcher Fetcher) {
var str_arrs []string
var mux sync.Mutex
var crawl func(string, int)
crawl = func(url string, depth int) {
if depth <= 0 {
return
}
mux.Lock()
for _, v := range str_arrs {
if url == v {
mux.Unlock()
return
}
}
str_arrs = append(str_arrs, url)
mux.Unlock()
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
for _, u := range urls {
go crawl(u, depth-1) // drop "go" to make this a plain recursive crawl
}
}
crawl(url, depth)
return
}
func main() {
Crawl("http://golang.org/", 4, fetcher)
}
Here's my solution, using sync.WaitGroup and a SafeCache of fetched urls:
package main
import (
"fmt"
"sync"
)
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
// Safe to use concurrently
type SafeCache struct {
fetched map[string]string
mux sync.Mutex
}
func (c *SafeCache) Add(url, body string) {
c.mux.Lock()
defer c.mux.Unlock()
if _, ok := c.fetched[url]; !ok {
c.fetched[url] = body
}
}
func (c *SafeCache) Contains(url string) bool {
c.mux.Lock()
defer c.mux.Unlock()
_, ok := c.fetched[url]
return ok
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
// note: the cache must be passed as *SafeCache; passing it by value
// would copy the mutex, so goroutines would no longer share one lock
func Crawl(url string, depth int, fetcher Fetcher, cache *SafeCache,
wg *sync.WaitGroup) {
defer wg.Done()
if depth <= 0 {
return
}
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
cache.Add(url, body)
for _, u := range urls {
if !cache.Contains(u) {
wg.Add(1)
go Crawl(u, depth-1, fetcher, cache, wg)
}
}
return
}
func main() {
cache := SafeCache{fetched: make(map[string]string)}
var wg sync.WaitGroup
wg.Add(1)
Crawl("http://golang.org/", 4, fetcher, &cache, &wg)
wg.Wait()
}
Below is a simple solution for parallelization using only a sync.WaitGroup.
var fetchedUrlMap = make(map[string]bool)
var mutex sync.Mutex
func Crawl(url string, depth int, fetcher Fetcher) {
if depth <= 0 {
return
}
// check and mark under the mutex, so concurrent crawls can neither
// race on the map nor fetch the same URL twice
mutex.Lock()
if fetchedUrlMap[url] {
mutex.Unlock()
return
}
fetchedUrlMap[url] = true
mutex.Unlock()
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
var wg sync.WaitGroup
for _, u := range urls {
// fmt.Println("Solving for ", u)
wg.Add(1)
go func(uv string) {
Crawl(uv, depth-1, fetcher)
wg.Done()
}(u)
}
wg.Wait()
}
Here is my solution :)
package main
import (
"fmt"
"runtime"
"sync"
)
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, set map[string]bool) {
// TODO: Fetch URLs in parallel.
// TODO: Don't fetch the same URL twice.
// This implementation doesn't do either:
if depth <= 0 {
return
}
// use a set to identify if the URL should be traversed or not
if set[url] == true {
wg.Done()
return
} else {
fmt.Println(runtime.NumGoroutine())
set[url] = true
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
for _, u := range urls {
Crawl(u, depth-1, fetcher, set)
}
}
}
var wg sync.WaitGroup
func main() {
wg.Add(6)
collectedURLs := make(map[string]bool)
go Crawl("https://golang.org/", 4, fetcher, collectedURLs)
wg.Wait()
}
// fakeFetcher is Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult
type fakeResult struct {
body string
urls []string
}
func (f fakeFetcher) Fetch(url string) (string, []string, error) {
if res, ok := f[url]; ok {
return res.body, res.urls, nil
}
return "", nil, fmt.Errorf("not found: %s", url)
}
// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
"https://golang.org/": &fakeResult{
"The Go Programming Language",
[]string{
"https://golang.org/pkg/",
"https://golang.org/cmd/",
},
},
"https://golang.org/pkg/": &fakeResult{
"Packages",
[]string{
"https://golang.org/",
"https://golang.org/cmd/",
"https://golang.org/pkg/fmt/",
"https://golang.org/pkg/os/",
},
},
"https://golang.org/pkg/fmt/": &fakeResult{
"Package fmt",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
"https://golang.org/pkg/os/": &fakeResult{
"Package os",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
}
Since most of the solutions here don't work for me (including the accepted answer), I'll add my own, inspired by Kamil's (special thanks :) – no duplicates, all valid URLs.
package main
import (
"fmt"
"runtime"
"sync"
)
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, set map[string]bool) {
// TODO: Fetch URLs in parallel.
// TODO: Don't fetch the same URL twice.
defer wg.Done()
if depth <= 0 { return }
// use a set to identify if the URL should be traversed or not
// (note: the set map itself is not mutex-protected, so strictly
// speaking concurrent access to it is still a data race)
fmt.Println(runtime.NumGoroutine())
set[url] = true
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
for _, u := range urls {
if set[u] == false {
wg.Add(1)
go Crawl(u, depth-1, fetcher, set)
}
}
}
var wg sync.WaitGroup
func main() {
collectedURLs := make(map[string]bool)
wg.Add(1)
Crawl("https://golang.org/", 4, fetcher, collectedURLs)
wg.Wait()
}
/*
Exercise: Web Crawler
In this exercise you'll use Go's concurrency features to parallelize a web crawler.
Modify the Crawl function to fetch URLs in parallel without fetching the same URL twice.
Hint: you can keep a cache of the URLs that have been fetched on a map, but maps alone are not safe for concurrent use!
*/
package main
import (
"fmt"
"sync"
"time"
)
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
type Response struct {
url string
urls []string
body string
err error
}
var ch chan Response = make(chan Response)
var fetched map[string]bool = make(map[string]bool)
var wg sync.WaitGroup
var mu sync.Mutex
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
// TODO: Fetch URLs in parallel.
// TODO: Don't fetch the same URL twice.
// This implementation doesn't do either:
var fetch func(url string, depth int, fetcher Fetcher)
wg.Add(1)
recv := func() {
for res := range ch {
body, _, err := res.body, res.urls, res.err
if err != nil {
fmt.Println(err)
continue
}
fmt.Printf("found: %s %q\n", res.url, body)
}
}
fetch = func(url string, depth int, fetcher Fetcher) {
time.Sleep(time.Second / 2)
defer wg.Done()
if depth <= 0 {
return
}
// check and mark under the mutex so two goroutines
// cannot claim the same URL at once
mu.Lock()
if fetched[url] {
mu.Unlock()
return
}
fetched[url] = true
mu.Unlock()
body, urls, err := fetcher.Fetch(url)
for _, u := range urls {
wg.Add(1)
go fetch(u, depth-1, fetcher)
}
ch <- Response{url, urls, body, err}
}
go fetch(url, depth, fetcher)
go recv()
return
}
func main() {
Crawl("https://golang.org/", 4, fetcher)
wg.Wait()
}
// fakeFetcher is Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult
type fakeResult struct {
body string
urls []string
}
func (f fakeFetcher) Fetch(url string) (string, []string, error) {
if res, ok := f[url]; ok {
return res.body, res.urls, nil
}
return "", nil, fmt.Errorf("not found: %s", url)
}
// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
"https://golang.org/": &fakeResult{
"The Go Programming Language",
[]string{
"https://golang.org/pkg/",
"https://golang.org/cmd1/",
},
},
"https://golang.org/pkg/": &fakeResult{
"Packages",
[]string{
"https://golang.org/",
"https://golang.org/cmd2/",
"https://golang.org/pkg/fmt/",
"https://golang.org/pkg/os/",
},
},
"https://golang.org/pkg/fmt/": &fakeResult{
"Package fmt",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
"https://golang.org/pkg/os/": &fakeResult{
"Package os",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
}
https://gist.github.com/gaogao1030/5d63ed925534f3610ccb7e25ed46992a
Super-simple solution, using one channel per fetched URL to wait for the goroutines crawling the URLs found in the corresponding body.
Duplicate URLs are avoided using a UrlCache struct with a mutex and a map[string]struct{} (this saves memory with respect to a map of booleans).
Side effects that could cause deadlocks are mitigated by using defer for both mutex unlocking and channel writes.
package main
import (
"fmt"
"sync"
)
type UrlCache struct {
v map[string]struct{}
mux sync.Mutex
}
func NewUrlCache() *UrlCache {
res := UrlCache{}
res.v = make(map[string]struct{})
return &res
}
func (c *UrlCache) check(url string) bool {
c.mux.Lock()
defer c.mux.Unlock()
if _, p := c.v[url]; !p {
c.v[url] = struct{}{}
return false
}
return true
}
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, uc *UrlCache, c chan struct{}) {
defer func() { c <- struct{}{} }()
if depth <= 0 {
return
}
if uc.check(url) {
return
}
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
ci := make(chan struct{})
for _, u := range urls {
go Crawl(u, depth-1, fetcher, uc, ci)
}
// Wait for the parallel crawls to finish
for range urls {
<-ci
}
}
func main() {
c := make(chan struct{})
go Crawl("https://golang.org/", 4, fetcher, NewUrlCache(), c)
<-c
}
// fakeFetcher is Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult
type fakeResult struct {
body string
urls []string
}
func (f fakeFetcher) Fetch(url string) (string, []string, error) {
if res, ok := f[url]; ok {
return res.body, res.urls, nil
}
return "", nil, fmt.Errorf("not found: %s", url)
}
// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
"https://golang.org/": &fakeResult{
"The Go Programming Language",
[]string{
"https://golang.org/pkg/",
"https://golang.org/cmd/",
},
},
"https://golang.org/pkg/": &fakeResult{
"Packages",
[]string{
"https://golang.org/",
"https://golang.org/cmd/",
"https://golang.org/pkg/fmt/",
"https://golang.org/pkg/os/",
},
},
"https://golang.org/pkg/fmt/": &fakeResult{
"Package fmt",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
"https://golang.org/pkg/os/": &fakeResult{
"Package os",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
}
Below is my solution. defer is a really powerful construct in Go.
type FetchedUrls struct {
m sync.Mutex
urls map[string]bool
}
var urlcache FetchedUrls
func (urlcache *FetchedUrls) CacheIfNotPresent(url string) bool {
urlcache.m.Lock()
defer urlcache.m.Unlock()
_, ok := urlcache.urls[url]
if !ok {
urlcache.urls[url] = true
}
return !ok
}
func BlockOnChan(ch chan int) {
<-ch
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, ch chan int) {
defer close(ch)
if depth <= 0 {
return
}
if !urlcache.CacheIfNotPresent(url) {
return
}
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
for _, u := range urls {
fch := make(chan int)
defer BlockOnChan(fch)
go Crawl(u, depth-1, fetcher, fch)
}
}
func main() {
urlcache.urls = make(map[string]bool)
Crawl("https://golang.org/", 4, fetcher, make(chan int))
}
Adding my solution for others to reference – hope it helps. Being able to compare our different approaches is great!
You can try the code below in the Go Playground.
func Crawl(url string, depth int, fetcher Fetcher) {
defer wg.Done()
if depth <= 0 {
return
} else if _, ok := fetched.Load(url); ok {
fmt.Printf("Skipping (already fetched): %s\n", url)
return
}
body, urls, err := fetcher.Fetch(url)
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("found: %s %q\n", url, body)
fetched.Store(url, nil)
for _, u := range urls {
wg.Add(1)
go Crawl(u, depth-1, fetcher)
}
}
// As there could be many types of events leading to errors when
// fetching a url, only marking it when it is correctly processed
var fetched sync.Map
// For each Crawl, wg is incremented, and it waits for all to finish
// on main method
var wg sync.WaitGroup
func main() {
wg.Add(1)
go Crawl("https://golang.org/", 4, fetcher)
wg.Wait()
}
Using mutex and channels
package main
import (
"fmt"
"sync"
)
type SafeMap struct {
mu sync.Mutex
seen map[string]bool
}
func (s *SafeMap) getVal(url string) bool {
s.mu.Lock()
defer s.mu.Unlock()
return s.seen[url]
}
func (s *SafeMap) setVal(url string) {
s.mu.Lock()
defer s.mu.Unlock()
s.seen[url] = true
}
var s = SafeMap{seen: make(map[string]bool)}
type Fetcher interface {
// Fetch returns the body of URL and
// a slice of URLs found on that page.
Fetch(url string) (body string, urls []string, err error)
}
// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, ch chan bool) {
// TODO: Fetch URLs in parallel.
// TODO: Don't fetch the same URL twice.
// This implementation doesn't do either:
if depth <= 0 || s.getVal(url) {
ch <- false
return
}
body, urls, err := fetcher.Fetch(url)
s.setVal(url)
if err != nil {
fmt.Println(err)
ch <- false
return
}
fmt.Printf("found: %s %q\n", url, body)
chs := make(map[string]chan bool, len(urls))
for _, u := range urls {
chs[u] = make(chan bool)
go Crawl(u, depth-1, fetcher, chs[u])
}
for _, v := range urls {
<-chs[v]
}
ch <- true
return
}
func main() {
ch := make(chan bool)
go Crawl("https://golang.org/", 4, fetcher, ch)
<-ch
}
// fakeFetcher is Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult
type fakeResult struct {
body string
urls []string
}
func (f fakeFetcher) Fetch(url string) (string, []string, error) {
if res, ok := f[url]; ok {
return res.body, res.urls, nil
}
return "", nil, fmt.Errorf("not found: %s", url)
}
// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
"https://golang.org/": &fakeResult{
"The Go Programming Language",
[]string{
"https://golang.org/pkg/",
"https://golang.org/cmd/",
},
},
"https://golang.org/pkg/": &fakeResult{
"Packages",
[]string{
"https://golang.org/",
"https://golang.org/cmd/",
"https://golang.org/pkg/fmt/",
"https://golang.org/pkg/os/",
},
},
"https://golang.org/pkg/fmt/": &fakeResult{
"Package fmt",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
"https://golang.org/pkg/os/": &fakeResult{
"Package os",
[]string{
"https://golang.org/",
"https://golang.org/pkg/",
},
},
}
You can solve the problem of closing the channel by using a sync.WaitGroup and spawning a separate goroutine to close the channel.
This solution does not solve for the requirement to avoid repeated visits to urls.
func Crawl(url string, depth int, fetcher Fetcher, ch chan string, wg *sync.WaitGroup) {
defer wg.Done()
if depth <= 0 {
return
}
body, urls, err := fetcher.Fetch(url)
if err != nil {
ch <- fmt.Sprintln(err)
return
}
ch <- fmt.Sprintf("found: %s %q", url, body)
for _, u := range urls {
wg.Add(1)
go Crawl(u, depth-1, fetcher, ch, wg)
}
}
func main() {
ch := make(chan string)
var wg sync.WaitGroup
wg.Add(1)
go Crawl("https://golang.org/", 4, fetcher, ch, &wg)
go func() {
wg.Wait()
close(ch)
}()
for i := range ch {
fmt.Println(i)
}
}
