How to Use Proxies with Python Requests in 2026: Complete Guide
Python’s requests library is the most popular HTTP client in the Python ecosystem, with over 30 million weekly downloads. Whether you are building a web scraper, monitoring API endpoints, or collecting data at scale, proxies are essential for avoiding IP bans, bypassing rate limits, and accessing geo-restricted content.
This guide covers everything from basic proxy setup to advanced patterns like rotation, retry decorators, and async alternatives with working code examples.
Why Use Proxies with Python Requests?
Every HTTP request you make with requests.get() reveals your server’s IP address. Problems arise quickly:
- IP bans: Websites block IPs that send too many requests
- Rate limiting: APIs and sites throttle requests from a single IP
- Geo-blocking: Content is restricted based on IP location
- Fingerprinting: Repeated requests from one IP look automated
Proxies route your traffic through intermediate servers, giving you a different IP for each request or session. Use our proxy cost calculator to estimate bandwidth costs before starting.
Prerequisites
- Python 3.9+
- requests library installed
pip install requests
For SOCKS proxy support:
pip install "requests[socks]"
Or install PySocks directly:
pip install PySocks
Basic Proxy Setup
HTTP and HTTPS Proxies
import requests
proxies = {
"http": "http://proxy.example.com:8080",
"https": "http://proxy.example.com:8080"
}
response = requests.get("https://httpbin.org/ip", proxies=proxies)
print(response.json())
# {"origin": "proxy.ip.address"}Key detail: The dictionary keys "http" and "https" specify which protocol the proxy handles. Even if your proxy uses HTTP, you still need the "https" key to proxy HTTPS requests through it.
SOCKS5 Proxy
import requests
proxies = {
"http": "socks5://proxy.example.com:1080",
"https": "socks5://proxy.example.com:1080"
}
response = requests.get("https://httpbin.org/ip", proxies=proxies)
print(response.json())
SOCKS5 vs SOCKS5h: Use socks5h:// if you want DNS resolution to happen on the proxy server side (recommended for anonymity):
proxies = {
"http": "socks5h://proxy.example.com:1080",
"https": "socks5h://proxy.example.com:1080"
}
Authenticated Proxy
Most commercial proxies require username/password authentication:
import requests
proxies = {
"http": "http://username:password@proxy.example.com:8080",
"https": "http://username:password@proxy.example.com:8080"
}
response = requests.get("https://httpbin.org/ip", proxies=proxies)
print(response.json())
Special Characters in Password
If your password contains special characters like @, #, or :, URL-encode them:
from urllib.parse import quote
password = "p@ss:word#123"
encoded_password = quote(password, safe="")
proxies = {
"http": f"http://username:{encoded_password}@proxy.example.com:8080",
"https": f"http://username:{encoded_password}@proxy.example.com:8080"
}
Session-Based Proxy Usage
Using a Session object is more efficient because it reuses the underlying TCP connection:
import requests
session = requests.Session()
session.proxies = {
"http": "http://username:password@proxy.example.com:8080",
"https": "http://username:password@proxy.example.com:8080"
}
# All requests through this session use the proxy
response1 = session.get("https://httpbin.org/ip")
response2 = session.get("https://httpbin.org/headers")
response3 = session.get("https://httpbin.org/user-agent")
print(response1.json())
print(response2.json())
print(response3.json())
session.close()
Session with Custom Headers
Combine proxies with custom headers for a more realistic request profile:
import requests
session = requests.Session()
session.proxies = {
"http": "http://user:pass@proxy.example.com:8080",
"https": "http://user:pass@proxy.example.com:8080"
}
session.headers.update({
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Language": "en-US,en;q=0.9",
"Accept-Encoding": "gzip, deflate, br"
})
response = session.get("https://httpbin.org/headers")
print(response.json())
Rotating Proxies with Requests
Simple Round-Robin Rotation
import requests
import itertools
proxy_list = [
"http://user:pass@proxy1.example.com:8080",
"http://user:pass@proxy2.example.com:8080",
"http://user:pass@proxy3.example.com:8080",
"http://user:pass@proxy4.example.com:8080",
"http://user:pass@proxy5.example.com:8080",
]
proxy_cycle = itertools.cycle(proxy_list)
urls = [f"https://example.com/page/{i}" for i in range(20)]
for url in urls:
proxy = next(proxy_cycle)
proxies = {"http": proxy, "https": proxy}
try:
response = requests.get(url, proxies=proxies, timeout=10)
print(f"[{response.status_code}] {url}")
except requests.exceptions.RequestException as e:
print(f"[ERROR] {url}: {e}")Random Rotation
import requests
import random
proxy_list = [
"http://user:pass@proxy1.example.com:8080",
"http://user:pass@proxy2.example.com:8080",
"http://user:pass@proxy3.example.com:8080",
]
def get_random_proxy():
proxy = random.choice(proxy_list)
return {"http": proxy, "https": proxy}
response = requests.get("https://httpbin.org/ip", proxies=get_random_proxy())
print(response.json())
Smart Rotation with Health Tracking
import requests
import random
import time
from collections import defaultdict
class ProxyPool:
def __init__(self, proxies):
self.proxies = proxies
self.failures = defaultdict(int)
self.last_used = defaultdict(float)
self.max_failures = 3
self.cooldown = 60 # seconds
def get_proxy(self):
available = [
p for p in self.proxies
if self.failures[p] < self.max_failures
and time.time() - self.last_used[p] > 1 # min 1s between uses
]
if not available:
# Reset failures and try again
self.failures.clear()
available = self.proxies
proxy = random.choice(available)
self.last_used[proxy] = time.time()
return proxy
def report_success(self, proxy):
self.failures[proxy] = max(0, self.failures[proxy] - 1)
def report_failure(self, proxy):
self.failures[proxy] += 1
def get_proxies_dict(self, proxy):
return {"http": proxy, "https": proxy}
# Usage
pool = ProxyPool([
"http://user:pass@proxy1.example.com:8080",
"http://user:pass@proxy2.example.com:8080",
"http://user:pass@proxy3.example.com:8080",
"http://user:pass@proxy4.example.com:8080",
])
urls = [f"https://example.com/page/{i}" for i in range(50)]
for url in urls:
proxy = pool.get_proxy()
try:
response = requests.get(
url,
proxies=pool.get_proxies_dict(proxy),
timeout=10
)
if response.status_code == 200:
pool.report_success(proxy)
print(f"[OK] {url}")
else:
pool.report_failure(proxy)
print(f"[{response.status_code}] {url}")
except requests.exceptions.RequestException:
pool.report_failure(proxy)
print(f"[FAIL] {url} via {proxy}")Handling SSL Certificates with Proxies
Some proxies, especially transparent or corporate proxies, may cause SSL verification issues:
Disable SSL Verification (Not Recommended for Production)
import requests
import urllib3
# Suppress InsecureRequestWarning
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
response = requests.get(
"https://example.com",
proxies={"http": "http://proxy:8080", "https": "http://proxy:8080"},
verify=False
)
Use a Custom CA Bundle
response = requests.get(
"https://example.com",
proxies={"http": "http://proxy:8080", "https": "http://proxy:8080"},
verify="/path/to/custom-ca-bundle.crt"
)
Pin Certificates per Session
session = requests.Session()
session.verify = "/path/to/ca-bundle.crt"
session.proxies = {
"http": "http://proxy:8080",
"https": "http://proxy:8080"
}
response = session.get("https://example.com")
Error Handling and Retry Decorators
Basic Retry with Exponential Backoff
import requests
import time
def fetch_with_retry(url, proxies, max_retries=3, backoff_factor=2):
for attempt in range(max_retries):
try:
response = requests.get(url, proxies=proxies, timeout=10)
response.raise_for_status()
return response
except requests.exceptions.ProxyError as e:
print(f"Proxy error on attempt {attempt + 1}: {e}")
except requests.exceptions.ConnectTimeout:
print(f"Connection timeout on attempt {attempt + 1}")
except requests.exceptions.HTTPError as e:
if e.response.status_code == 429:
print(f"Rate limited on attempt {attempt + 1}")
elif e.response.status_code >= 500:
print(f"Server error on attempt {attempt + 1}")
else:
raise # Don't retry client errors like 404
except requests.exceptions.ConnectionError:
print(f"Connection error on attempt {attempt + 1}")
if attempt < max_retries - 1:
sleep_time = backoff_factor ** attempt
print(f"Retrying in {sleep_time}s...")
time.sleep(sleep_time)
raise Exception(f"All {max_retries} attempts failed for {url}")
Using requests HTTPAdapter for Built-In Retries
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
session = requests.Session()
retry_strategy = Retry(
total=3,
backoff_factor=1,
status_forcelist=[429, 500, 502, 503, 504],
allowed_methods=["GET", "HEAD"]
)
adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount("http://", adapter)
session.mount("https://", adapter)
session.proxies = {
"http": "http://user:pass@proxy.example.com:8080",
"https": "http://user:pass@proxy.example.com:8080"
}
response = session.get("https://httpbin.org/ip")
print(response.json())
Retry Decorator with Proxy Rotation
import requests
import functools
import random
import time
def retry_with_proxy_rotation(proxy_list, max_retries=3, backoff=1):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
proxies_to_try = random.sample(proxy_list, min(max_retries, len(proxy_list)))
for attempt, proxy in enumerate(proxies_to_try):
kwargs["proxies"] = {"http": proxy, "https": proxy}
try:
return func(*args, **kwargs)
except requests.exceptions.RequestException as e:
print(f"Attempt {attempt + 1} with {proxy}: {e}")
if attempt < len(proxies_to_try) - 1:
time.sleep(backoff * (attempt + 1))
raise Exception("All proxy attempts exhausted")
return wrapper
return decorator
# Usage
PROXIES = [
"http://user:pass@proxy1.example.com:8080",
"http://user:pass@proxy2.example.com:8080",
"http://user:pass@proxy3.example.com:8080",
]
@retry_with_proxy_rotation(PROXIES, max_retries=3)
def fetch_page(url, proxies=None):
response = requests.get(url, proxies=proxies, timeout=10)
response.raise_for_status()
return response.text
html = fetch_page("https://example.com")
print(f"Got {len(html)} bytes")Environment Variables for Proxy Configuration
You can set proxies globally using environment variables:
export HTTP_PROXY="http://user:pass@proxy.example.com:8080"
export HTTPS_PROXY="http://user:pass@proxy.example.com:8080"
export NO_PROXY="localhost,127.0.0.1,.internal.com"
Python requests reads these automatically:
import requests
# No proxies parameter needed - reads from environment
response = requests.get("https://httpbin.org/ip")
print(response.json())
To override environment proxies for a specific request:
# Bypass proxy for this request
response = requests.get("https://httpbin.org/ip", proxies={"http": None, "https": None})
Async Alternatives with Proxies
For high-concurrency scraping, synchronous requests is a bottleneck. Here are async alternatives.
aiohttp with Proxy
pip install aiohttp aiohttp-socks
import aiohttp
import asyncio
async def fetch(url, proxy):
async with aiohttp.ClientSession() as session:
async with session.get(url, proxy=proxy) as response:
data = await response.json()
return data
async def main():
proxy = "http://user:pass@proxy.example.com:8080"
urls = [f"https://httpbin.org/anything/{i}" for i in range(10)]
tasks = [fetch(url, proxy) for url in urls]
results = await asyncio.gather(*tasks, return_exceptions=True)
for url, result in zip(urls, results):
if isinstance(result, Exception):
print(f"[ERROR] {url}: {result}")
else:
print(f"[OK] {url}")
asyncio.run(main())
aiohttp with SOCKS5 Proxy
from aiohttp_socks import ProxyConnector
import aiohttp
import asyncio
async def fetch_socks(url):
connector = ProxyConnector.from_url("socks5://user:pass@proxy.example.com:1080")
async with aiohttp.ClientSession(connector=connector) as session:
async with session.get(url) as response:
return await response.json()
result = asyncio.run(fetch_socks("https://httpbin.org/ip"))
print(result)httpx with Proxy
pip install "httpx[socks]"
import httpx
import asyncio
# Synchronous usage
with httpx.Client(proxy="http://user:pass@proxy.example.com:8080") as client:
response = client.get("https://httpbin.org/ip")
print(response.json())
# Async usage
async def main():
async with httpx.AsyncClient(proxy="http://user:pass@proxy.example.com:8080") as client:
urls = [f"https://httpbin.org/anything/{i}" for i in range(10)]
tasks = [client.get(url) for url in urls]
responses = await asyncio.gather(*tasks)
for r in responses:
print(r.status_code, r.json().get("url"))
asyncio.run(main())
httpx with SOCKS5
import httpx
with httpx.Client(proxy="socks5://user:pass@proxy.example.com:1080") as client:
response = client.get("https://httpbin.org/ip")
print(response.json())
Async Proxy Rotation with aiohttp
import aiohttp
import asyncio
import random
PROXIES = [
"http://user:pass@proxy1.example.com:8080",
"http://user:pass@proxy2.example.com:8080",
"http://user:pass@proxy3.example.com:8080",
]
async def fetch_with_rotation(session, url, max_retries=3):
for attempt in range(max_retries):
proxy = random.choice(PROXIES)
try:
async with session.get(url, proxy=proxy, timeout=aiohttp.ClientTimeout(total=10)) as response:
if response.status == 200:
return await response.text()
print(f"[{response.status}] {url} via {proxy}")
except Exception as e:
print(f"[ERROR] {url} via {proxy}: {e}")
return None
async def main():
urls = [f"https://example.com/page/{i}" for i in range(50)]
async with aiohttp.ClientSession() as session:
semaphore = asyncio.Semaphore(10) # Limit concurrency
async def bounded_fetch(url):
async with semaphore:
return await fetch_with_rotation(session, url)
results = await asyncio.gather(*[bounded_fetch(url) for url in urls])
success = sum(1 for r in results if r is not None)
print(f"Completed: {success}/{len(urls)} successful")
asyncio.run(main())
Troubleshooting Common Issues
requests.exceptions.ProxyError: Cannot connect to proxy
Cause: The proxy server is unreachable.
Fix:
- Verify the proxy is online with curl (or the Python check shown after this list):
curl -x http://proxy:port https://httpbin.org/ip
- Check that your firewall allows outbound connections to the proxy port
- Confirm the proxy address and port are correct
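You can run the same liveness check from Python before kicking off a job. A minimal sketch, where the proxy URL is a placeholder:
import requests
def proxy_is_alive(proxy_url, timeout=5):
    # Returns True if the proxy can reach httpbin.org and relay the response.
    proxies = {"http": proxy_url, "https": proxy_url}
    try:
        response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=timeout)
        return response.ok
    except requests.exceptions.RequestException:
        return False
print(proxy_is_alive("http://user:pass@proxy.example.com:8080"))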
requests.exceptions.SSLError with Proxy
Cause: SSL certificate verification fails through the proxy.
Fix: If you trust the proxy, disable verification (development only):
response = requests.get(url, proxies=proxies, verify=False)
For production, use the proxy provider’s CA certificate.
407 Proxy Authentication Required
Cause: Credentials are missing or wrong.
Fix: Include credentials in the proxy URL:
proxies = {"http": "http://user:pass@proxy:port", "https": "http://user:pass@proxy:port"}Connection Timeout vs Read Timeout
Cause: Different stages of the request can time out.
Fix: Set both timeouts explicitly:
response = requests.get(url, proxies=proxies, timeout=(5, 30))
# (connect_timeout, read_timeout) in seconds
Proxy Works for HTTP but Not HTTPS
Cause: The proxy does not support HTTPS CONNECT tunneling.
Fix: Use a proxy that supports HTTPS, or use a SOCKS5 proxy:
proxies = {"http": "socks5h://proxy:1080", "https": "socks5h://proxy:1080"}Summary
Python requests has straightforward proxy support via the proxies parameter. For production use (a combined sketch follows this list):
- Use Sessions for connection reuse and consistent proxy configuration
- Use SOCKS5h for DNS-level anonymity
- Implement retry logic with exponential backoff using HTTPAdapter or custom decorators
- Use proxy pools with health tracking for reliability
- Switch to httpx or aiohttp for async/concurrent workloads
- Always URL-encode special characters in proxy credentials
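Tying these recommendations together, here is a minimal sketch of a production-style setup; the proxy URL, credentials, and retry settings are placeholders:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from urllib.parse import quote
# URL-encode credentials in case they contain special characters.
password = quote("p@ss:word#123", safe="")
proxy_url = f"http://username:{password}@proxy.example.com:8080"
session = requests.Session()
session.proxies = {"http": proxy_url, "https": proxy_url}
# Retry transient failures (429 and 5xx) with exponential backoff.
retry_strategy = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
session.mount("http://", HTTPAdapter(max_retries=retry_strategy))
session.mount("https://", HTTPAdapter(max_retries=retry_strategy))
response = session.get("https://httpbin.org/ip", timeout=(5, 30))
print(response.json())
session.close()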
Estimate your bandwidth needs with the proxy cost calculator and verify your setup is working as expected with the browser fingerprint tester.
Related Reading
- How to Use Proxies with cURL in 2026: Complete Guide
- How to Use Proxies with Node.js Axios in 2026: Complete Guide
- Best Proxies for Amazon 2026: Complete Guide
- Best Proxies for Discord 2026: Bot Hosting & Account Management
- Best Proxies for eBay 2026: Complete Guide
- Best Proxies for Netflix 2026: Geo-Unblocking & Catalog Access