Web scraping lets you programmatically extract data from websites. Often you need to scrape multiple pages of a site to gather complete information. In this article, we'll see how to scrape multiple pages in Go using the net/http and goquery packages.
Prerequisites
To follow along, you'll need Go installed and the goquery package (install it with go get github.com/PuerkitoBio/goquery). We'll use these imports:
import (
    "net/http"

    "github.com/PuerkitoBio/goquery"
)
Define Base URL
We'll scrape the Copyblogger blog, whose listing pages follow this URL pattern:
https://copyblogger.com/blog/
https://copyblogger.com/blog/page/2/
https://copyblogger.com/blog/page/3/
Let's define the base URL pattern:
baseURL := "https://copyblogger.com/blog/page/%d/"
The %d placeholder will be replaced with the page number when we build each page's URL with fmt.Sprintf.
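For example, substituting a page number produces a concrete URL (page 2 here is just for illustration):
pageURL := fmt.Sprintf(baseURL, 2)
// pageURL is now "https://copyblogger.com/blog/page/2/"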
Specify Number of Pages
Next, we'll specify how many pages to scrape. Let's scrape the first 5 pages:
numPages := 5
Loop Through Pages
We can now loop from 1 to numPages, building the URL for each page:
for pageNum := 1; pageNum <= numPages; pageNum++ {
    // Construct page URL
    url := fmt.Sprintf(baseURL, pageNum)

    // Code to scrape each page
}
Send Request and Parse HTML
Inside the loop, we'll send a GET request and parse the HTML using goquery:
res, err := http.Get(url)
if err != nil {
    log.Fatal(err)
}
defer res.Body.Close()

doc, err := goquery.NewDocumentFromReader(res.Body)
if err != nil {
    log.Fatal(err)
}
This gives us a parsed HTML document to extract data from.
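One refinement worth adding, not shown in the original snippet: check the HTTP status code before parsing, so a blocked or missing page doesn't silently produce an empty document. A minimal sketch:
if res.StatusCode != http.StatusOK {
    log.Fatalf("request for %s failed with status %d", url, res.StatusCode)
}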
Extract Data
Now within the loop we can use goquery selectors to find elements and pull out the data we want.
For example, to get all article elements:
doc.Find("article").Each(func(i int, s *goquery.Selection) {
// Extract data from selection
})
We can loop through the selections and extract information like the title, URL, author, and categories.
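If you want to keep the results instead of just printing them, one option is to collect each article into a small struct. The Post type and its field names below are our own illustration rather than part of the original code, and the CSS selectors assume the same Copyblogger markup used in the full example:
type Post struct {
    Title      string
    URL        string
    Author     string
    Categories []string
}

var posts []Post

doc.Find("article").Each(func(i int, s *goquery.Selection) {
    href, _ := s.Find("a.entry-title-link").Attr("href")
    post := Post{
        Title:  s.Find("h2.entry-title").Text(),
        URL:    href,
        Author: s.Find("div.post-author a").Text(),
    }
    s.Find("div.entry-categories a").Each(func(_ int, c *goquery.Selection) {
        post.Categories = append(post.Categories, c.Text())
    })
    posts = append(posts, post)
})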
Full Code
Our full code to scrape 5 pages is:
package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/PuerkitoBio/goquery"
)

func main() {
    baseURL := "https://copyblogger.com/blog/page/%d/"
    numPages := 5

    for pageNum := 1; pageNum <= numPages; pageNum++ {
        url := fmt.Sprintf(baseURL, pageNum)

        res, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()

        doc, err := goquery.NewDocumentFromReader(res.Body)
        if err != nil {
            log.Fatal(err)
        }

        doc.Find("article").Each(func(i int, s *goquery.Selection) {
            // Extract data from selection
            title := s.Find("h2.entry-title").Text()
            articleURL, _ := s.Find("a.entry-title-link").Attr("href")
            author := s.Find("div.post-author a").Text()

            var categories []string
            s.Find("div.entry-categories a").Each(func(i int, c *goquery.Selection) {
                categories = append(categories, c.Text())
            })

            // Print extracted data
            fmt.Printf("Title: %s\n", title)
            fmt.Printf("URL: %s\n", articleURL)
            fmt.Printf("Author: %s\n", author)
            fmt.Printf("Categories: %s\n\n", categories)
        })
    }
}
This allows us to scrape and extract data from multiple pages sequentially. The code can be extended to scrape any number of pages.
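If you extend this to many pages, it's also polite, and less likely to trigger rate limiting, to pause briefly between requests. This delay isn't in the original code; a minimal sketch, placed at the end of each loop iteration and requiring the time package:
// Wait two seconds before fetching the next page (import "time").
time.Sleep(2 * time.Second)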
Summary
Web scraping enables collecting large datasets programmatically. With the techniques here, you can scrape and extract information from multiple pages of a website in Go.
While these examples are great for learning, scraping production-level sites can pose challenges like CAPTCHAs, IP blocks, and bot detection. Rotating proxies and automated CAPTCHA solving can help.
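As a rough illustration, Go's net/http client can route requests through a proxy via http.Transport. The proxy address below is a placeholder, and true rotation would mean choosing a different proxy per request; this is a sketch, not the Proxies API integration:
// Requires "net/http", "net/url", and "log".
proxyURL, err := url.Parse("http://user:pass@proxy.example.com:8080") // placeholder proxy
if err != nil {
    log.Fatal(err)
}
client := &http.Client{
    Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
}
res, err := client.Get("https://copyblogger.com/blog/")
if err != nil {
    log.Fatal(err)
}
defer res.Body.Close()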
Proxies API offers a simple API for rendering pages with built-in proxy rotation, CAPTCHA solving, and evasion of IP blocks. You can fetch rendered pages in any language without configuring browsers or proxies yourself.
This allows scraping at scale without the headaches of IP blocks. Proxies API has a free tier to get started. Check out the API and sign up for an API key to supercharge your web scraping.
With the power of Proxies API combined with Go libraries like goquery, you can scrape data at scale without getting blocked.