HTTP/2 and HTTP/3 with Proxies: Complete Technical Guide
HTTP/1.1 served us well for 20 years, but modern websites increasingly require HTTP/2 and HTTP/3 support. If your proxy infrastructure only speaks HTTP/1.1, you are leaving performance on the table and potentially getting flagged as a bot by servers that expect modern protocol support.
This guide covers how HTTP/2 and HTTP/3 interact with proxy servers, what breaks, what improves, and how to configure your stack correctly.
HTTP/1.1 vs HTTP/2 vs HTTP/3: Key Differences
HTTP/1.1 (1997) HTTP/2 (2015) HTTP/3 (2022)
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Text-based│ │ Binary │ │ Binary │
│ protocol │ │ framing │ │ framing │
├──────────┤ ├──────────┤ ├──────────┤
│ One req │ │ Multiple │ │ Multiple │
│ per conn │ │ streams │ │ streams │
├──────────┤ ├──────────┤ ├──────────┤
│ TCP │ │ TCP+TLS │ │ QUIC/UDP │
└──────────┘ └──────────┘ └──────────┘
6 conn 1 conn 1 conn
per host per host per host
Head-of-line Stream-level No HoL
blocking HoL blocking blocking

| Feature | HTTP/1.1 | HTTP/2 | HTTP/3 |
|---|---|---|---|
| Transport | TCP | TCP | QUIC (UDP) |
| Multiplexing | No (pipelining rarely used) | Yes | Yes |
| Header compression | No | HPACK | QPACK |
| Server push | No | Yes | Yes |
| TLS required | No | Practically yes | Always (built-in) |
| Connection setup | 1-3 RTT | 2-3 RTT | 0-1 RTT |
How HTTP/2 Works with Proxies
Forward Proxy with HTTP/2
Most forward proxies support HTTP/2 on the client-to-proxy leg, but the proxy-to-target connection may use either HTTP/1.1 or HTTP/2:
Client ──HTTP/2──→ Proxy ──HTTP/2──→ Target Server
Client ──HTTP/2──→ Proxy ──HTTP/1.1→ Target Server (downgrade)
Client ──HTTP/1.1→ Proxy ──HTTP/2──→ Target Server (upgrade)

The CONNECT method for HTTPS tunneling works the same in HTTP/2 — the proxy creates a tunnel, and the client negotiates its own TLS+HTTP/2 with the target through that tunnel.
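To make the tunnel concrete, here is a minimal sketch of the CONNECT handshake using only the standard library (the proxy address is a placeholder; real code needs proxy authentication and error handling):

```python
import socket
import ssl

def build_connect_request(host: str, port: int) -> bytes:
    # The CONNECT request the client sends to the proxy; after a 200
    # reply, the proxy blindly relays bytes in both directions.
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n\r\n"
    ).encode()

def open_h2_tunnel(proxy_host, proxy_port, target_host, target_port=443):
    sock = socket.create_connection((proxy_host, proxy_port))
    sock.sendall(build_connect_request(target_host, target_port))
    reply = sock.recv(4096)
    if b" 200 " not in reply.split(b"\r\n", 1)[0]:
        raise ConnectionError(f"CONNECT failed: {reply[:80]!r}")
    # TLS and ALPN are negotiated end-to-end with the target, not the proxy
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2 inside the tunnel
    tls = ctx.wrap_socket(sock, server_hostname=target_host)
    return tls, tls.selected_alpn_protocol()  # "h2" if the target speaks HTTP/2

if __name__ == "__main__":
    print(build_connect_request("example.com", 443).decode())
```

The key point: the proxy never sees which HTTP version flows through the tunnel, which is why even HTTP/1.1-only proxies can carry HTTP/2 traffic.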
Python HTTP/2 Proxy Requests
import httpx
# httpx supports HTTP/2 natively
client = httpx.Client(
    http2=True,
    proxy="http://user:pass@proxy.example.com:8080"
)
response = client.get("https://httpbin.org/get")
print(f"Protocol: {response.http_version}") # HTTP/2
print(f"Status: {response.status_code}")
# Async version for high-throughput scraping
import asyncio
async def scrape_with_http2():
    async with httpx.AsyncClient(
        http2=True,
        proxy="http://user:pass@proxy.example.com:8080"
    ) as client:
        tasks = [
            client.get(f"https://example.com/page/{i}")
            for i in range(100)
        ]
        responses = await asyncio.gather(*tasks)
        for r in responses:
            print(f"{r.url} → {r.http_version}")

asyncio.run(scrape_with_http2())

HTTP/2 Multiplexing Benefits for Scraping
With HTTP/1.1, each connection handles one request at a time. HTTP/2 multiplexes many requests over a single connection:
import asyncio
import httpx
import time

async def benchmark_protocols():
    urls = ["https://httpbin.org/delay/1" for _ in range(20)]
    # HTTP/1.1 — sequential over limited connections
    start = time.time()
    async with httpx.AsyncClient(http2=False) as client:
        tasks = [client.get(url) for url in urls]
        await asyncio.gather(*tasks)
    http1_time = time.time() - start
    # HTTP/2 — multiplexed over a single connection
    start = time.time()
    async with httpx.AsyncClient(http2=True) as client:
        tasks = [client.get(url) for url in urls]
        await asyncio.gather(*tasks)
    http2_time = time.time() - start
    print(f"HTTP/1.1: {http1_time:.2f}s")
    print(f"HTTP/2:   {http2_time:.2f}s")
    print(f"Speedup: {http1_time/http2_time:.1f}x")

asyncio.run(benchmark_protocols())
# Typical result: HTTP/2 is 3-6x faster for concurrent requests

HTTP/2 Fingerprinting Concerns
Websites use HTTP/2 fingerprinting to detect bots. Your HTTP/2 settings must match real browsers:
# HTTP/2 settings that get fingerprinted:
# 1. SETTINGS frame values
# 2. WINDOW_UPDATE initial value
# 3. Header order in HEADERS frame
# 4. Priority/dependency tree
# Browser-like HTTP/2 settings (Chrome):
CHROME_H2_SETTINGS = {
    'HEADER_TABLE_SIZE': 65536,
    'ENABLE_PUSH': 0,  # Chrome disables push
    'MAX_CONCURRENT_STREAMS': 1000,
    'INITIAL_WINDOW_SIZE': 6291456,
    'MAX_HEADER_LIST_SIZE': 262144,
}

# Firefox uses different values:
FIREFOX_H2_SETTINGS = {
    'HEADER_TABLE_SIZE': 65536,
    'INITIAL_WINDOW_SIZE': 131072,
    'MAX_FRAME_SIZE': 16384,
}

Libraries like curl-impersonate and tls-client handle HTTP/2 fingerprint matching automatically.
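To see what servers actually inspect, the sketch below builds a raw HTTP/2 SETTINGS frame (RFC 7540 wire format) carrying the Chrome-like values above. The identifier/value pairs, and the order they appear in, are exactly what fingerprinting systems record:

```python
import struct

# Standard HTTP/2 setting identifiers (RFC 7540 §6.5.2)
SETTINGS_IDS = {
    'HEADER_TABLE_SIZE': 0x1,
    'ENABLE_PUSH': 0x2,
    'MAX_CONCURRENT_STREAMS': 0x3,
    'INITIAL_WINDOW_SIZE': 0x4,
    'MAX_FRAME_SIZE': 0x5,
    'MAX_HEADER_LIST_SIZE': 0x6,
}

def build_settings_frame(settings: dict) -> bytes:
    # Payload: one 6-byte (16-bit identifier, 32-bit value) pair per setting.
    payload = b"".join(
        struct.pack(">HI", SETTINGS_IDS[name], value)
        for name, value in settings.items()
    )
    # Frame header: 24-bit length, type 0x4 (SETTINGS), flags 0, stream id 0
    header = struct.pack(">I", len(payload))[1:] + bytes([0x4, 0x0]) + struct.pack(">I", 0)
    return header + payload

chrome_frame = build_settings_frame({
    'HEADER_TABLE_SIZE': 65536,
    'ENABLE_PUSH': 0,
    'MAX_CONCURRENT_STREAMS': 1000,
    'INITIAL_WINDOW_SIZE': 6291456,
    'MAX_HEADER_LIST_SIZE': 262144,
})
print(len(chrome_frame))  # 9-byte header + 5 × 6-byte entries = 39 bytes
```

A client that sends different pairs, values, or ordering than the browser it claims to be is trivially distinguishable at this layer, regardless of its User-Agent.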
HTTP/3 and QUIC with Proxies
Why HTTP/3 Matters for Proxies
HTTP/3 uses QUIC (UDP-based) instead of TCP, which fundamentally changes how proxies work:
- No TCP handshake: 0-RTT connection resumption
- No head-of-line blocking: Lost packets only affect their stream
- Connection migration: Survives IP changes (important for mobile proxies)
- Built-in encryption: TLS 1.3 integrated into the protocol
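Clients typically discover HTTP/3 via the Alt-Svc response header (e.g. `alt-svc: h3=":443"; ma=86400`) delivered over an existing HTTP/1.1 or HTTP/2 connection. A minimal parser sketch, assuming the common quoted-authority form of the header:

```python
import re

def parse_alt_svc(header: str) -> list:
    # Extract (protocol-id, authority) pairs, e.g. ("h3", ":443").
    # Parameters like ma=86400 (max age) are unquoted and skipped here.
    return re.findall(r'([\w.-]+)="([^"]*)"', header)

print(parse_alt_svc('h3=":443"; ma=86400, h3-29=":443"; ma=86400'))
# → [('h3', ':443'), ('h3-29', ':443')]
```

This discovery step matters for proxies: if the proxy strips or never forwards Alt-Svc, clients behind it will silently stay on HTTP/2.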
QUIC Proxy Challenges
Traditional proxies intercept TCP connections. QUIC runs over UDP, which breaks many proxy architectures:
Traditional HTTP proxy:
Client → TCP connect to proxy → CONNECT tunnel → TCP to target
✅ Works with TCP-based HTTP/1.1 and HTTP/2
QUIC challenge:
Client → UDP to target (port 443)
❌ Most proxies don't handle UDP tunneling

MASQUE: The HTTP/3 Proxy Protocol
IETF developed MASQUE (Multiplexed Application Substrate over QUIC Encryption) for HTTP/3 proxying:
MASQUE Proxy Flow:
1. Client connects to proxy via HTTP/3 (QUIC)
2. Client sends CONNECT-UDP request
3. Proxy creates UDP tunnel to target
4. QUIC traffic flows through the tunnel
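The CONNECT-UDP request in step 2 is an extended CONNECT with `:protocol = connect-udp` (RFC 9298); the target is encoded in the request path using a well-known URI template. A small sketch of the request headers a MASQUE client sends (header names per the RFC, proxy hostname hypothetical):

```python
def masque_udp_path(target_host: str, target_port: int) -> str:
    # Default URI template from RFC 9298
    return f"/.well-known/masque/udp/{target_host}/{target_port}/"

# Pseudo-headers a MASQUE client would send on the proxy connection
masque_headers = [
    (b":method", b"CONNECT"),
    (b":protocol", b"connect-udp"),
    (b":scheme", b"https"),
    (b":authority", b"proxy.example.com"),  # hypothetical MASQUE proxy
    (b":path", masque_udp_path("target.example.com", 443).encode()),
    (b"capsule-protocol", b"?1"),
]
print(masque_headers[1])
```

After the proxy accepts, UDP datagrams (carrying the client's QUIC packets) flow on that stream, so the end-to-end QUIC session stays encrypted and opaque to the proxy.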
Client ──QUIC──→ MASQUE Proxy ──UDP──→ Target Server

Current HTTP/3 Support in Scraping Tools
# curl supports HTTP/3 (with --http3 flag)
# curl --http3 https://example.com
# Python: aioquic library for QUIC
# pip install aioquic
import asyncio
from aioquic.asyncio import connect
from aioquic.h3.connection import H3Connection
from aioquic.quic.configuration import QuicConfiguration

async def http3_request():
    configuration = QuicConfiguration(
        is_client=True,
        alpn_protocols=["h3"],
    )
    async with connect(
        "example.com",
        443,
        configuration=configuration,
    ) as protocol:
        # HTTP/3 is a binary protocol: requests are QPACK-encoded HEADERS
        # frames, not text lines like "GET / HTTP/1.1"
        h3 = H3Connection(protocol._quic)
        stream_id = protocol._quic.get_next_available_stream_id()
        h3.send_headers(
            stream_id,
            [
                (b":method", b"GET"),
                (b":scheme", b"https"),
                (b":authority", b"example.com"),
                (b":path", b"/"),
            ],
            end_stream=True,
        )
        protocol.transmit()
        await asyncio.sleep(1)

# Note: HTTP/3 proxy support is still limited
# Most scraping still uses HTTP/2 through CONNECT tunnels

Configuring Proxy Servers for HTTP/2
Squid Proxy
# squid.conf
# Squid does not terminate HTTP/2 natively (there is no http2= option).
# HTTP/2 still works through Squid because HTTPS uses a CONNECT tunnel:
# the client negotiates TLS+HTTP/2 end-to-end with the target and Squid
# only relays bytes. A standard TLS-enabled configuration is enough:
https_port 3129 cert=/etc/squid/cert.pem key=/etc/squid/key.pem \
    tls-dh=prime256v1:/etc/squid/dhparam.pem \
    options=NO_SSLv3
http_port 3128

Nginx as Forward Proxy
# nginx.conf — HTTP/2 forward proxy
stream {
    server {
        listen 8080;
        # Use proxy_protocol for client IP preservation
        proxy_protocol on;
        proxy_pass backend;
    }
}

http {
    server {
        listen 443 ssl http2;
        ssl_certificate     /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/key.pem;

        # HTTP/2 server push (removed in nginx 1.25.1+)
        http2_push_preload on;

        location / {
            proxy_pass https://backend;
            proxy_http_version 1.1;  # upstream HTTP/2 only via grpc_pass
        }
    }
}

HAProxy HTTP/2
# haproxy.cfg
frontend proxy_frontend
    bind *:443 ssl crt /etc/haproxy/cert.pem alpn h2,http/1.1
    mode http
    # Record which protocol the client negotiated
    http-request set-header X-Protocol %[ssl_fc_alpn]
    default_backend proxy_backend

backend proxy_backend
    mode http
    # Force HTTP/2 to backend
    server target1 backend:443 ssl alpn h2 verify none

Performance Benchmarks
Testing HTTP/1.1 vs HTTP/2 vs HTTP/3 through proxies:
Test: 1000 requests to same host through proxy
Connection: 100Mbps, 50ms latency to proxy, 100ms to target
HTTP/1.1 (6 connections):
├─ Total time: 42.3s
├─ Avg latency: 253ms
├─ TCP handshakes: 6
└─ Bytes overhead: 847KB (headers)
HTTP/2 (1 connection):
├─ Total time: 14.1s
├─ Avg latency: 141ms
├─ TCP handshakes: 1
└─ Bytes overhead: 312KB (HPACK compressed)
HTTP/3 (1 QUIC connection):
├─ Total time: 11.8s
├─ Avg latency: 118ms
├─ Handshakes: 1 (0-RTT on reconnect)
└─ Bytes overhead: 298KB (QPACK compressed)

HTTP/2 through a proxy delivers roughly 3x throughput improvement for scraping workloads hitting the same host.
Detecting Protocol Support
Before configuring your scraper, check what the target supports:
import subprocess

def check_protocol_support(domain):
    """Check HTTP/2 and HTTP/3 support for a domain."""
    results = {}
    # Check HTTP/2
    try:
        result = subprocess.run(
            ["curl", "-sI", "--http2", f"https://{domain}",
             "-o", "/dev/null", "-w", "%{http_version}"],
            capture_output=True, text=True, timeout=10
        )
        results['http2'] = result.stdout.strip() == '2'
    except Exception:
        results['http2'] = False
    # Check HTTP/3 (requires curl built with HTTP/3 support)
    try:
        result = subprocess.run(
            ["curl", "-sI", "--http3", f"https://{domain}",
             "-o", "/dev/null", "-w", "%{http_version}"],
            capture_output=True, text=True, timeout=10
        )
        results['http3'] = result.stdout.strip() == '3'
    except Exception:
        results['http3'] = False
    # Check Alt-Svc header for HTTP/3 advertisement
    try:
        result = subprocess.run(
            ["curl", "-sI", f"https://{domain}"],
            capture_output=True, text=True, timeout=10
        )
        results['alt_svc'] = 'alt-svc' in result.stdout.lower()
        results['h3_advertised'] = 'h3' in result.stdout.lower()
    except Exception:
        pass
    return results

# Example usage
for domain in ['google.com', 'cloudflare.com', 'amazon.com']:
    support = check_protocol_support(domain)
    print(f"{domain}: {support}")

Internal Links
- TCP/IP Proxy Internals — understand the transport layer underneath HTTP/2 and HTTP/3
- TLS Fingerprinting Deep Dive — how HTTP/2 settings affect your fingerprint
- Proxy Performance Benchmarks — compare protocol performance with real data
- Bandwidth Optimization for Proxies — reduce overhead with header compression
- Web Scraping Architecture — design patterns for high-throughput scraping
FAQ
Do all proxy providers support HTTP/2?
Most premium proxy providers support HTTP/2 on the client-to-proxy connection. However, not all support HTTP/2 on the proxy-to-target connection. Check your provider’s documentation or test with curl --http2 -x proxy:port https://target.com.
Will using HTTP/1.1 get me detected as a bot?
Increasingly, yes. Major websites like Google, Cloudflare-protected sites, and social media platforms track your protocol version. Using HTTP/1.1 when real browsers use HTTP/2 is a detectable anomaly. Always enable HTTP/2 in your scraping stack.
How do I use HTTP/3 with a proxy?
HTTP/3 proxy support is still emerging. Most proxies create a CONNECT tunnel over TCP, and the client can negotiate HTTP/3 (QUIC) with the target through that tunnel. Native HTTP/3 proxying requires MASQUE protocol support, which few proxies offer yet.
Does HTTP/2 multiplexing reduce the number of proxy IPs I need?
Not directly — multiplexing reduces connections, not IP addresses. However, because HTTP/2 sends more requests per connection, you can achieve higher throughput per proxy IP, which may reduce the total number of IPs needed for your target request volume.
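A back-of-envelope sketch of that trade-off (the per-connection throughput figures are illustrative assumptions, not measurements):

```python
import math

def ips_needed(target_rps: float, rps_per_conn: float, conns_per_ip: int) -> int:
    # Ceiling of total request rate over what one IP can sustain
    return math.ceil(target_rps / (rps_per_conn * conns_per_ip))

# Assume a 1000 req/s target; HTTP/1.1 gets 6 connections per IP at ~4 req/s
# each, HTTP/2 one multiplexed connection at ~40 req/s.
print(ips_needed(1000, 4, 6))   # → 42 IPs with HTTP/1.1
print(ips_needed(1000, 40, 1))  # → 25 IPs with HTTP/2
```

The per-IP request rate still matters for rate limiting, so in practice the IP count is usually driven by the target's tolerance, not raw throughput.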
What is the performance difference between HTTP/2 and HTTP/3 for scraping?
HTTP/3 offers 10-20% improvement over HTTP/2 in high-latency or lossy network conditions, thanks to 0-RTT connection setup and elimination of head-of-line blocking. On stable connections, the difference is marginal. The bigger win is usually upgrading from HTTP/1.1 to HTTP/2.
Related Reading
- AJAX Request Interception: Scraping API Calls Directly
- Azure Functions for Serverless Web Scraping: the Complete Guide
- Build an Anti-Detection Test Suite: Verify Browser Stealth
- Build a News Crawler in Python: Step-by-Step Tutorial
- How to Configure Proxies on iPhone and Android
- How to Use Proxies in Node.js (Axios, Fetch, Puppeteer)