One of the handy things about the Python Requests module is its built-in error handling. Requests does a lot of work under the hood to catch errors and raise exceptions when things go wrong. This saves developers time and effort when making HTTP requests.
However, you still need to write code to handle those errors gracefully to make your application resilient. In this article, we'll cover some common errors you may encounter when a request fails and how to catch and handle them properly.
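One thing worth knowing up front: Requests raises exceptions for network-level failures on its own, but an HTTP error status (4xx or 5xx) does not raise anything by default. You opt in by calling raise_for_status() on the response. A minimal sketch, using httpbin.org (a public testing service) as a stand-in URL:

import requests

response = requests.get("https://httpbin.org/status/404", timeout=5)
print(response.status_code)  # 404, but no exception has been raised yet

try:
    # Opt in to an exception for 4xx/5xx responses.
    response.raise_for_status()
except requests.HTTPError as err:
    print("HTTP error:", err)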
Common Errors
Some errors you may see when making a request to a bad URL:

- requests.ConnectionError: the request never reached a server (DNS failure, refused connection, unreachable host)
- requests.Timeout: the server did not respond within the timeout you set
- requests.HTTPError: the server returned a 4xx or 5xx status and you called raise_for_status()
- requests.TooManyRedirects: the request exceeded the configured maximum number of redirects
- requests.RequestException: the base class that all of the above inherit from
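Because every one of these inherits from requests.RequestException, a single except clause on the base class acts as a catch-all. You can verify the hierarchy yourself:

import requests

# Every specific Requests error is a subclass of RequestException,
# so catching the base class catches all of them.
for exc in (requests.ConnectionError, requests.Timeout,
            requests.HTTPError, requests.TooManyRedirects):
    print(exc.__name__, issubclass(exc, requests.RequestException))  # all True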
Handling Errors
We can use try/except blocks to catch requests errors:
import requests

try:
    # Always pass a timeout; without one, Requests waits indefinitely
    # and requests.Timeout will never be raised.
    response = requests.get("http://badurl", timeout=5)
except requests.ConnectionError as err:
    print("Failed to connect:", err)
except requests.Timeout as err:
    print("Request timed out:", err)
except requests.RequestException as err:
    # Base-class catch-all for any other Requests error. (Named err
    # rather than re to avoid shadowing the standard re module.)
    print("There was an error:", err)
This allows your program to continue executing despite the failure.
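In practice you would usually wrap this in a small helper so the rest of your code doesn't repeat the try/except. A minimal sketch (the fetch_or_none name is just an illustration, not a Requests API):

from typing import Optional

import requests

def fetch_or_none(url: str, timeout: float = 5.0) -> Optional[requests.Response]:
    """Return the response, or None if any Requests error occurs."""
    try:
        return requests.get(url, timeout=timeout)
    except requests.RequestException as err:
        print(f"Request to {url} failed: {err}")
        return None

response = fetch_or_none("http://badurl")
if response is not None:
    print(response.status_code)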
Retrying Failed Requests
For transient errors, you may want to retry the request before giving up. The pattern below uses a Session and a simple loop to retry a fixed number of times:
import requests

session = requests.Session()
tries = 3

for attempt in range(tries):
    try:
        response = session.get("http://flakyurl", timeout=5)
        break  # success, stop retrying
    except requests.RequestException:
        if attempt == tries - 1:
            raise  # out of attempts, let the error bubble up
        print("Retrying...")
This pattern retries up to the tries limit before allowing the exception to bubble up. Note the bare raise on the final attempt: it re-raises the original exception with its traceback intact, which is preferable to binding the exception to a name like re and re-raising it (that also shadows the standard re module).
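For production code, you can also delegate retries to urllib3's Retry machinery through an HTTPAdapter, which adds exponential backoff and retries on specific status codes. A sketch, reusing the flaky URL from above (urllib3 ships as a dependency of Requests, so no extra install is needed):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()

# Retry up to 3 times with exponential backoff between attempts,
# and also retry on these transient HTTP status codes.
retries = Retry(total=3, backoff_factor=0.5,
                status_forcelist=[502, 503, 504])
adapter = HTTPAdapter(max_retries=retries)
session.mount("http://", adapter)
session.mount("https://", adapter)

response = session.get("http://flakyurl", timeout=5)

With this in place, every request made through the session retries automatically, so you no longer need the manual loop.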
Handling errors gracefully ensures your application remains available despite upstream failures. Requests' built-in exceptions combined with try/except give you the tools to build resilient applications.