Making HTTP requests is core to many Python applications. The popular requests library makes this easy, but sometimes requests fail unexpectedly. Instead of giving up after the first failure, it's smarter to retry failed requests to improve reliability.
Here are some tips for adding smart retries to Python requests:
Use Exponential Backoff
When retrying, use exponential backoff - double (or otherwise grow) the wait time between attempts so a struggling server isn't hammered. Here's sample code:

import requests
from time import sleep

MAX_RETRIES = 5
DELAY = 1  # initial wait in seconds

delay = DELAY
for attempt in range(MAX_RETRIES):
    try:
        response = requests.get(url)
        break  # success - stop retrying
    except requests.RequestException:
        if attempt == MAX_RETRIES - 1:
            raise  # out of retries - surface the error
        sleep(delay)
        delay *= 2  # double the wait before the next attempt

This waits 1 second, then 2, 4 and 8 before giving up. Customize the initial delay, backoff factor and maximum retries as needed.
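Adding a little randomness ("jitter") to each delay also helps: it stops many clients from retrying in lockstep after a shared outage. A minimal sketch, using a hypothetical backoff_delays helper (not part of any library):

```python
import random

def backoff_delays(initial=1.0, factor=2.0, retries=5):
    """Yield exponentially growing delays with random jitter.

    Hypothetical helper: each delay is the exponential base value
    plus up to one extra second of random jitter.
    """
    delay = initial
    for _ in range(retries):
        yield delay + random.uniform(0, 1)  # jitter spreads out retry storms
        delay *= factor

# Example: inspect the planned waits before each retry
for d in backoff_delays(initial=1, factor=2, retries=4):
    print(f"would sleep {d:.2f}s")
```

You'd call sleep() with each yielded value between attempts instead of doubling a variable inline.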
Handle Specific Exceptions
Some exceptions indicate a retry won't help. A malformed URL, for example, will fail the same way every time, while a timeout or dropped connection is usually transient. Handle these separately:

import requests
from time import sleep
from requests.exceptions import InvalidURL, MissingSchema

try:
    response = requests.get(url)
except (InvalidURL, MissingSchema):
    # The URL itself is bad - retrying can't fix it, so re-raise
    raise
except requests.RequestException:
    # Other failures (timeouts, connection resets) may be transient
    sleep(DELAY)

This avoids wasted retries for known-bad failures like malformed URLs, while still retrying transient network errors.
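The same idea applies to HTTP status codes: most 4xx responses mean the request itself is wrong, while 429 (rate limited) and 5xx server errors are worth retrying. A small sketch, with a hypothetical is_retryable_status helper:

```python
def is_retryable_status(status_code):
    """Return True if the HTTP status suggests a retry may succeed.

    Hypothetical helper: 429 (rate limited) and 5xx server errors
    are treated as transient; other statuses are not retried.
    """
    if status_code == 429:
        return True
    return 500 <= status_code < 600

# Example: a 503 is worth retrying, a 404 is not
print(is_retryable_status(503))  # True
print(is_retryable_status(404))  # False
```

You could check response.status_code against this before deciding whether to sleep and loop again.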
Use Retrying Packages
For more advanced retry logic, use a dedicated package such as tenacity (the maintained successor to the older retrying package), or configure requests' own transport-level retries via urllib3. These handle backoff, jitter and complex retry conditions for you.
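For instance, requests can delegate retries to urllib3 entirely via a transport adapter. A minimal sketch, assuming requests is installed; the specific counts and status codes here are illustrative:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 5 times on connection errors and on the listed
# server-side statuses, with delays growing exponentially
# according to backoff_factor.
retry = Retry(
    total=5,
    backoff_factor=0.5,
    status_forcelist=[429, 502, 503, 504],
)

session = requests.Session()
adapter = HTTPAdapter(max_retries=retry)
session.mount("https://", adapter)
session.mount("http://", adapter)

# session.get(url) now retries automatically before raising
```

Every request made through this session gets the retry policy, with no retry loop in your own code.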
Key Takeaways
Adding smart retries improves reliability at a low cost. Failures happen - with good retry handling, your Python requests will recover from them gracefully.