Go Go Gadget Web Crawler

In A Tour of Go, you are given the following problem:

In this exercise you'll use Go's concurrency features to parallelize a web crawler.

Modify the Crawl function to fetch URLs in parallel without fetching the same URL twice.

This is the code you're given:

package main

import (
    "fmt"
)

type Fetcher interface {
    // Fetch returns the body of URL and
    // a slice of URLs found on that page.
    Fetch(url string) (body string, urls []string, err error)
}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
    // TODO: Fetch URLs in parallel.
    // TODO: Don't fetch the same URL twice.
    // This implementation doesn't do either:
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)
    for _, u := range urls {
        Crawl(u, depth-1, fetcher)
    }
    return
}

func main() {
    Crawl("http://golang.org/", 4, fetcher)
}

// fakeFetcher is Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult

type fakeResult struct {
    body string
    urls []string
}

func (f fakeFetcher) Fetch(url string) (string, []string, error) {
    if res, ok := f[url]; ok {
        return res.body, res.urls, nil
    }
    return "", nil, fmt.Errorf("not found: %s", url)
}

// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
    "http://golang.org/": &fakeResult{
        "The Go Programming Language",
        []string{
            "http://golang.org/pkg/",
            "http://golang.org/cmd/",
        },
    },
    "http://golang.org/pkg/": &fakeResult{
        "Packages",
        []string{
            "http://golang.org/",
            "http://golang.org/cmd/",
            "http://golang.org/pkg/fmt/",
            "http://golang.org/pkg/os/",
        },
    },
    "http://golang.org/pkg/fmt/": &fakeResult{
        "Package fmt",
        []string{
            "http://golang.org/",
            "http://golang.org/pkg/",
        },
    },
    "http://golang.org/pkg/os/": &fakeResult{
        "Package os",
        []string{
            "http://golang.org/",
            "http://golang.org/pkg/",
        },
    },
}

And this is the solution I came up with. It replaces Crawl and main from the code above; the Fetcher interface and the fakeFetcher remain unchanged:

type Job struct {
    url   string
    depth int
    done  chan bool
}

// dedup reads jobs from ch and starts a Crawl goroutine for each URL it
// has not seen before. Jobs for already-seen URLs, or with no depth left,
// are acknowledged immediately on their done channel.
func dedup(fetcher Fetcher, ch chan Job) {
    seen := make(map[string]bool)
    for job := range ch {
        if _, ok := seen[job.url]; ok || job.depth <= 0 {
            job.done <- true
            continue
        }
        seen[job.url] = true
        go Crawl(job, fetcher, ch)
    }
}

// Crawl fetches one page, queues a child job for every URL it finds, and
// signals its own done channel once every child has finished (either
// skipped by dedup or fully crawled).
func Crawl(job Job, fetcher Fetcher, q chan Job) {
    body, urls, err := fetcher.Fetch(job.url)
    if err != nil {
        fmt.Println(err)
        job.done <- true
        return
    }
    fmt.Printf("found: %s %q\n", job.url, body)
    ch := make(chan bool, len(urls))
    for _, u := range urls {
        u := u
        go func() { q <- Job{u, job.depth - 1, ch} }()
    }
    for i := 0; i < len(urls); i++ {
        <-ch
    }
    job.done <- true
    return
}

func main() {
    done := make(chan bool, 1)
    q := make(chan Job)
    go dedup(fetcher, q)
    q <- Job{"http://golang.org/", 4, done}
    <-done
}

All jobs go through dedup's channel, so that a given URL is fetched only once. dedup kicks off a goroutine for each unique URL it sees, and the goroutine then adds URLs from that page back to the channel.
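
To make the completion signalling easier to see in isolation, here is a stripped-down sketch of the fan-out/fan-in pattern Crawl relies on: the parent waits for one value on a buffered channel per child before reporting its own completion. The names (parent) and the fixed child count are made up for illustration and are not part of the crawler; in the real solution the children don't signal directly but go through dedup, which either acknowledges them immediately or crawls them first.

package main

import "fmt"

// parent mirrors Crawl's bookkeeping: start one goroutine per child,
// collect one signal per child, then signal our own caller.
func parent(done chan<- bool) {
    const children = 3              // stand-in for len(urls)
    ch := make(chan bool, children) // buffered, like ch in Crawl
    for i := 0; i < children; i++ {
        go func(i int) {
            fmt.Println("child", i, "finished")
            ch <- true
        }(i)
    }
    for i := 0; i < children; i++ {
        <-ch
    }
    done <- true
}

func main() {
    done := make(chan bool, 1)
    go parent(done)
    <-done
    fmt.Println("parent finished")
}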

Go's way of thinking is new to me, so my main concerns are:

  • Is it idiomatic?
  • Is it clear what's going on to someone who's familiar with Go?
  • Is buffering the channels in this way necessary (or even helpful)?