How to Set Up a Proxy in Selenium in 2024

Jan 9, 2024 · 7 min read

Web scraping is a handy way to extract large volumes of data from websites. However, if you scrape aggressively without precautions, you'll likely get blocked by target sites.

Enter proxies - the secret weapon that helps you scrape undetected!

In this comprehensive guide, you'll learn how to configure proxies in Selenium to evade blocks and scale your web scrapers.

Isn't Scraping Without Proxies Easier? Common Proxy Misconceptions

When I first started web scraping, proxies seemed complicated. I wished for a magic wand that let me scrape freely without needing them!

Over time, however, I learned that proxies are indispensable for real-world scraping. Here are some common proxy myths busted:

Myth: Scraping only a few pages per site avoids blocks

Reality: Target sites track your overall usage across days. Even low volumes get detected over time.

Myth: Blocks only happen for illegal scraping activities

Reality: Sites block aggressively to prevent automation. Benign scraping also raises red flags.

Myth: Proxies introduce scraping complexity

Reality: selenium-wire and browser extensions simplify configurations now. The extra work is well worth it!

So proxies aren't the villain - they help you scrape data that would be otherwise inaccessible!

Why are Proxies So Beneficial for Web Scraping?

Proxies act as intermediaries between your scraper and target sites, which gives several key advantages:

Anonymity: Target sites see the proxy server's IP instead of yours, making your scraper harder to fingerprint.

Geo-targeting: Proxies let you appear to be browsing from anywhere in the world!

Rotation: Switching between proxy IPs mimics real user behavior, preventing usage-based blocks.

Troubleshooting: Having proxy access helps diagnose blocks by isolating failures to individual IPs.

Now that you see their perks, let's jump into integrating proxies into your Selenium setup!

Selenium Proxy Configuration Basics

To use proxies with Selenium, the first step is installing dependencies:

pip install selenium selenium-wire webdriver-manager

selenium-wire simplifies proxy handling tremendously compared to default Selenium.

For Chrome/Firefox, proxy configuration involves:

  1. Defining proxies in a dict
  2. Passing them to selenium-wire options
  3. Initializing the driver with those options

Here's a basic example:

from seleniumwire import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Route both HTTP and HTTPS traffic through the proxy
proxies = {
    'http': 'http://192.168.1.1:8080',
    'https': 'http://192.168.1.1:8080'
}

options = {
    'proxy': proxies
}

# Selenium 4 takes the driver path via a Service object
driver = webdriver.Chrome(
    service=Service(ChromeDriverManager().install()),
    seleniumwire_options=options
)

This opens Chrome using the given HTTP proxy for all requests!

โ˜๏ธ Gotcha: Don't include the protocol in proxy URLs when setting them in options.

Now you know how to add basic unauthenticated proxies to your scraper. Next up, handling proxies requiring logins!

Proxy Authentication - Giving Your Proxies Secret Passwords

Many paid proxy services provide username/password authenticated access to their pools, requiring special handling.

When I first tried authenticating proxies, I spent hours banging my head debugging weird errors! 💢

Here are two methods that finally worked for me:

Browser Extensions

This approach involves:

  1. Creating a custom browser extension manifest
  2. Adding background logic to auto-login to proxies
  3. Loading your custom extension in Selenium options

Here's a sample manifest:

{
  "name": "Proxy Auth",
  "version": "1.0.0",
  "manifest_version": 2,
  "background": {
    "scripts": ["background.js"]
  },
  "permissions": [
    "proxy",
    "webRequest",
    "webRequestBlocking",
    "<all_urls>"
  ]
}

And background script:

// Proxy auth credentials
var credentials = {
  username: 'my_username',
  password: 'my_password'
};

// Auto-supply stored credentials on proxy auth challenges
chrome.webRequest.onAuthRequired.addListener(
  function (details) {
    return { authCredentials: credentials };
  },
  { urls: ['<all_urls>'] },
  ['blocking']
);

We can then load this into Chrome:

from selenium import webdriver

options = webdriver.ChromeOptions()
# add_extension() expects a packed .crx/.zip file;
# an unpacked extension directory is loaded with --load-extension
options.add_argument('--load-extension=./proxy_extension')

driver = webdriver.Chrome(options=options)

What happens behind the scenes:

  1. Our extension listens for proxy auth requests
  2. The stored credentials get auto-filled in response

No more manual popup handling! 🎉

selenium-wire

If browser extensions seem complicated, selenium-wire makes proxy auth brain-dead simple:

from seleniumwire import webdriver

# Embed the credentials directly in the proxy URLs
# (username, password, and host are placeholders)
proxies = {
  'http': 'http://my_username:my_password@192.168.1.1:8080',
  'https': 'http://my_username:my_password@192.168.1.1:8080'
}

# Create the driver with the authenticated proxy
options = {
    'proxy': proxies
}

driver = webdriver.Chrome(seleniumwire_options=options)

Just pack the credentials straight into the URLs! selenium-wire supplies them under the hood, so no auth popups appear.
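One wrinkle worth knowing: if your credentials contain special characters like @ or :, URL-encode them before embedding them in the proxy URL. A quick sketch (the password here is hypothetical):

from urllib.parse import quote

# Special characters in credentials would break URL parsing,
# so percent-encode them first
username = quote('my_username', safe='')
password = quote('p@ss:word!', safe='')  # hypothetical password

proxy_url = f'http://{username}:{password}@192.168.1.1:8080'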

Both methods ensure your proxies stay accessible when rotating IPs across scraping jobs.

โ˜๏ธ With great proxy power comes great responsibility! Use ethically scraped data only!

Now let's look at leveraging proxies programmatically for maximum stealth.

Rotating Proxies - Going Incognito Across IP Addresses

The key to avoiding usage-based blocks is automating proxy rotation in your scraper. This shuffles the exit IPs used, mimicking real user behavior.

Here's sample logic to do it with a few residential proxies:

import random

from seleniumwire import webdriver

# Pool of authenticated residential proxies (placeholder hosts)
proxies = [
  'http://user1:pass1@proxy1.example.com:8080',
  'http://user2:pass2@proxy2.example.com:8080',
]

url = 'https://targetsite.com'


def fetch_page():
  # Pick a random proxy and launch a fresh browser using it
  random_proxy = random.choice(proxies)

  driver = webdriver.Chrome(seleniumwire_options={
      'proxy': {
          'http': random_proxy,
          'https': random_proxy
      }
  })

  driver.get(url)
  # scraping logic...
  driver.quit()


for _ in range(10):
  fetch_page()

For each request, we pick a random proxy and create a new Selenium instance leveraging it.

This constantly varies the egress IP hitting the sites! 🥷
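Launching a fresh browser per request is heavyweight, though. selenium-wire also exposes a writable proxy attribute on the driver, so a lighter sketch (same placeholder pool) hot-swaps proxies on a single instance:

import random

from seleniumwire import webdriver

proxies = [
    'http://user1:pass1@proxy1.example.com:8080',
    'http://user2:pass2@proxy2.example.com:8080',
]

driver = webdriver.Chrome()

for _ in range(10):
    proxy = random.choice(proxies)
    # Takes effect for subsequent requests on this driver
    driver.proxy = {'http': proxy, 'https': proxy}
    driver.get('https://targetsite.com')
    # scraping logic...

driver.quit()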

โ˜๏ธ Caveat: Free proxies often max out on connections if used heavily. Using premium residential proxies is recommended for serious scraping.

What happens though when you still get blocked with proxies? Here's my special troubleshooting formula!

Busting Through Blocks - My Proxy Troubleshooting Formula

Over years of web scraping, I narrowed down an exact checklist for diagnosing proxy failures:

My process goes:

  1. Test a fresh proxy - Create a new Selenium instance with a different, untouched proxy from the pool
  2. Compare headers - Print and contrast request headers between working and blocked proxies
  3. Retry the endpoint - Issue the request without the Selenium browser to isolate the issue (see the sketch after this list)
  4. Check tools - Run the proxies through online checker tools to flag bad IPs
  5. Call the provider - Notify your proxy vendor for unblocking assistance if organic blocks are detected
  6. Rotate more - Increase the automated rotation frequency if needed
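For step 3, a plain requests call through the same proxy isolates whether the browser itself is the problem. A sketch with placeholder values:

import requests

# Hit the target through the suspect proxy, no browser involved
proxy = 'http://user1:pass1@proxy1.example.com:8080'

resp = requests.get(
    'https://targetsite.com',
    proxies={'http': proxy, 'https': proxy},
    timeout=10,
)
print(resp.status_code)
print(dict(resp.headers))  # compare against a working proxy's headers (step 2)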

Following this blueprint methodically helped me identify and fix myriad tricky proxy errors.

The key is having enough good-quality proxies to systematically isolate problems.

Manually maintaining and debugging proxy clusters was ultimately unsustainable for my web scraping though...

Leveraging Proxy Services - Outsourcing Proxy Management to the Experts

Running proxies in-house has challenges:

  • Expiry and block monitoring
  • Credential distribution
  • Optimizing rotation settings
  • Performance benchmarking
Initially I insisted on controlling proxies myself - it felt more flexible having everything on-premise.

Over time, however, proxy management became a devops nightmare distracting from actual scraping!

Proxy APIs like ProxiesAPI finally enabled me to outsource proxies as a managed service.

Instead of handling proxies directly, my scraper now calls a simple API endpoint:

http://api.proxiesapi.com/?key=xxx&url=https://targetsite.com&render=true

This renders JavaScript behind the scenes using rotating, high-quality residential proxies! 🚀
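In Python, that's one requests call. A minimal sketch with a placeholder API key:

import requests

# One call replaces the whole proxy stack (key is a placeholder)
resp = requests.get('http://api.proxiesapi.com/', params={
    'key': 'xxx',
    'url': 'https://targetsite.com',
    'render': 'true',
})
html = resp.text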

I faced fewer blocks with the ProxiesAPI integration than even my in-house proxy servers!

Benefits I observed:

✅ One-line setup - No complex configuration

✅ Instant scaling - Millions of proxies available on-demand

✅ Global IPs - Great regional coverage to mimic users globally

✅ Reliability - Robust infrastructure, SLAs, and responsive support

✅ Affordability - Pay-per-use pricing and 1K free credits

If you're struggling with proxy management overhead, I highly recommend proxy services!

Key Takeaways - Level Up Your Proxy Game

Proxies are indispensable for real-world web scraping while avoiding blocks. Here are the main learnings:

✓ Bust proxy misconceptions - Proxies don't inherently complicate scraping when done right

✓ Understand proxy benefits - Anonymity, rotation, troubleshooting - proxies power unhindered data collection!

✓ Master base configurations - Cover both Chrome and Firefox

✓ Handle authentication - Simplify credential management with extensions or selenium-wire

✓ Rotate IPs - Vary crawling source IPs programmatically

✓ Methodically troubleshoot - Follow a step-by-step blueprint for diagnosing proxy failures

✓ Consider proxy services - Leverage managed proxies like ProxiesAPI, Luminati, or Oxylabs

Learning proxies deeply unlocked new levels of stability and scale in my web scraping. Hope these lessons level up your proxy game too!

As next steps, I recommend digging into advanced logic like dynamically assigning new proxies on block detection.
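Here's a rough sketch of that idea, assuming a selenium-wire driver (its last_request and writable proxy attributes do the heavy lifting; hosts are placeholders):

import random

from seleniumwire import webdriver

BLOCK_CODES = {403, 429}

proxies = [
    'http://user1:pass1@proxy1.example.com:8080',
    'http://user2:pass2@proxy2.example.com:8080',
]

def get_with_rotation(driver, url, retries=3):
    """Fetch a page, swapping in a fresh proxy whenever the
    response status looks like a block."""
    for _ in range(retries):
        driver.get(url)
        last = driver.last_request  # selenium-wire records requests
        if last and last.response and last.response.status_code in BLOCK_CODES:
            # Hot-swap to a different proxy and retry
            driver.proxy = {'https': random.choice(proxies)}
            continue
        return driver.page_source
    raise RuntimeError('Still blocked after rotating proxies')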

Happy proxy-powered scraping! 😎
