Python provides several ways to make HTTP requests, the main ones being the built-in urllib package and the popular third-party requests library. But there is another option that fills an important niche - urllib3. Here's when it can be useful.
urllib3 focuses specifically on the underlying HTTP connection - it handles the socket-level logic and TLS encryption itself. This allows it to provide useful capabilities like:

- connection pooling, so sockets to the same host are reused across requests
- thread safety, so a single pool can be shared between worker threads
- client-side TLS/SSL certificate verification
- configurable retries, timeouts, and redirect handling
- proxy support and transparent gzip/deflate decoding
Here's a quick example of making a GET request with urllib3:
import urllib3

# PoolManager handles connection pooling and thread safety for us
http = urllib3.PoolManager()
# The raw response body is available as bytes on r.data
r = http.request('GET', 'http://example.com')
print(r.data.decode('utf-8'))
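A PoolManager also accepts pool, retry, and timeout settings directly, which covers most of the connection-level tuning mentioned above. Here is a minimal sketch of that configuration, assuming a made-up API endpoint; the Retry and Timeout helpers come from urllib3.util.

import urllib3
from urllib3.util import Retry, Timeout

# Keep up to 10 connections per host, retry transient failures with backoff,
# and cap how long we wait to connect and to read a response.
http = urllib3.PoolManager(
    maxsize=10,
    retries=Retry(total=3, backoff_factor=0.5),
    timeout=Timeout(connect=2.0, read=5.0),
)

# 'fields' on a GET request is encoded into the query string
r = http.request('GET', 'https://api.example.com/items', fields={'page': '1'})
print(r.status)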
So when would you use urllib3 instead of urllib or requests?
The answer comes down to how much control you need. The main downsides are that urllib3 exposes a lower-level API, so you lose some of the conveniences of requests, such as automatic JSON decoding. It also has no built-in authentication helpers or session support.
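Those gaps are straightforward to work around by hand. Here is a rough sketch using the standard json module and urllib3's make_headers helper; the URL and credentials are placeholders.

import json
import urllib3
from urllib3.util import make_headers

http = urllib3.PoolManager()

# Build a Basic auth header ourselves instead of relying on a session object
headers = make_headers(basic_auth='user:secret')  # placeholder credentials

# No automatic JSON decoding - parse the raw bytes ourselves
r = http.request('GET', 'https://api.example.com/profile', headers=headers)
profile = json.loads(r.data.decode('utf-8'))
print(profile)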
So in summary, consider using urllib3 when:

- you want fine-grained control over connection pooling, retries, and timeouts
- you are making many requests to the same hosts, as in scraping or crawling
- a lower-level HTTP layer with minimal dependencies is all you need
For most basic API clients, requests tends to be the best choice. But for large scraping or crawling projects, or other advanced use cases, take a look at urllib3!
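To make the scraping case concrete, here is a small sketch that reuses one PoolManager across many requests, so connections to the same host are pooled rather than reopened; the URLs are made up for illustration.

import urllib3

# One shared PoolManager reuses sockets to the same host across requests
http = urllib3.PoolManager(maxsize=10)

# Placeholder URLs; a real crawler would pull these from a queue or sitemap
urls = [f'https://example.com/page/{i}' for i in range(1, 6)]

for url in urls:
    r = http.request('GET', url)
    if r.status == 200:
        print(url, len(r.data), 'bytes')
    else:
        print(url, 'returned status', r.status)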