FlareSolverr: Bypass Cloudflare with Docker – Complete Guide
FlareSolverr is an open-source proxy server that solves Cloudflare challenges automatically. Instead of implementing complex browser automation in your scraper, you send HTTP requests to FlareSolverr’s API, and it handles the JavaScript challenges, cookie generation, and page rendering for you.
This guide covers everything from initial setup to production deployment, including proxy integration, session management, and performance optimization.
What Is FlareSolverr?
FlareSolverr acts as a middleware between your scraper and Cloudflare-protected websites. Under the hood, it runs a headless Chromium browser that:
- Receives your request via its REST API
- Navigates to the target URL with a real browser
- Waits for Cloudflare challenges to resolve
- Returns the HTML content, cookies, and headers
The key advantage is simplicity: your scraper makes standard HTTP requests to FlareSolverr’s API rather than managing browser instances directly.
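The whole exchange is ordinary JSON over HTTP. As a minimal sketch (field names match the v1 API calls used throughout this guide), a request payload can be built like this:

```python
# Minimal sketch of the FlareSolverr v1 request contract: a JSON body
# POSTed to the API endpoint (http://localhost:8191/v1 by default).

def build_payload(url, max_timeout=60000):
    """Build the JSON body for a request.get command."""
    return {
        "cmd": "request.get",       # the action FlareSolverr should run
        "url": url,                 # target page behind Cloudflare
        "maxTimeout": max_timeout,  # ms to wait for the challenge to resolve
    }

payload = build_payload("https://example.com")
# On success, FlareSolverr replies with {"status": "ok", "solution": {...}},
# where solution holds the HTML ("response"), "cookies", and "userAgent".
```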
Architecture Overview
Your Scraper → FlareSolverr API (port 8191) → Headless Browser → Cloudflare → Target Site
                                   ↓
                     Returns HTML + Cookies
Installation
Method 1: Docker (Recommended)
Docker is the simplest way to run FlareSolverr. It handles all dependencies automatically.
# Pull and run FlareSolverr
docker run -d \
--name=flaresolverr \
-p 8191:8191 \
-e LOG_LEVEL=info \
--restart unless-stopped \
ghcr.io/flaresolverr/flaresolverr:latest
Verify it’s running:
curl http://localhost:8191/v1 \
-H "Content-Type: application/json" \
-d '{"cmd": "sessions.list"}'
Expected response:
{
  "status": "ok",
  "message": "",
  "sessions": []
}
Method 2: Docker Compose
For production setups, use Docker Compose:
version: "3.8"
services:
  flaresolverr:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - LOG_LEVEL=info
      - LOG_HTML=false
      - CAPTCHA_SOLVER=none
      - TZ=UTC
      - LANG=en_US
      - HEADLESS=true
      - BROWSER_TIMEOUT=40000
      - TEST_URL=https://www.google.com
    ports:
      - "8191:8191"
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '2.0'

docker-compose up -d

Method 3: From Source (Advanced)
git clone https://github.com/FlareSolverr/FlareSolverr.git
cd FlareSolverr
# Install dependencies
pip install -r requirements.txt
# Install Chrome
# On Ubuntu:
apt-get install -y chromium-browser
# Run
python -m flaresolverr
Environment Variables
| Variable | Default | Description |
|---|---|---|
| LOG_LEVEL | info | Logging verbosity: debug, info, warn, error |
| LOG_HTML | false | Log the HTML content of responses |
| CAPTCHA_SOLVER | none | CAPTCHA service: hcaptcha, none |
| TZ | UTC | Timezone for the browser |
| LANG | none | Browser language |
| HEADLESS | true | Run the browser in headless mode |
| BROWSER_TIMEOUT | 40000 | Max time (ms) to wait for challenges |
| TEST_URL | https://www.google.com | URL used to test the browser on startup |
Basic API Usage
GET Request
import requests

FLARESOLVERR_URL = "http://localhost:8191/v1"

def flaresolverr_get(url, max_timeout=60000):
    payload = {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": max_timeout
    }
    response = requests.post(FLARESOLVERR_URL, json=payload)
    result = response.json()
    if result["status"] == "ok":
        solution = result["solution"]
        return {
            "status_code": solution["status"],
            "html": solution["response"],
            "cookies": solution["cookies"],
            "user_agent": solution["userAgent"],
            "url": solution["url"]  # Final URL after redirects
        }
    else:
        raise Exception(f"FlareSolverr error: {result.get('message')}")

# Usage
result = flaresolverr_get("https://cloudflare-protected-site.com")
print(f"Status: {result['status_code']}")
print(f"HTML length: {len(result['html'])}")
print(f"Cookies: {[c['name'] for c in result['cookies']]}")
POST Request
def flaresolverr_post(url, post_data, max_timeout=60000):
    payload = {
        "cmd": "request.post",
        "url": url,
        "postData": post_data,
        "maxTimeout": max_timeout
    }
    response = requests.post(FLARESOLVERR_URL, json=payload)
    result = response.json()
    if result["status"] == "ok":
        return result["solution"]
    else:
        raise Exception(f"Error: {result.get('message')}")

# Example: Login form
result = flaresolverr_post(
    "https://target-site.com/login",
    "username=user&password=pass"
)
Custom Headers
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com/api/data",
    "maxTimeout": 60000,
    "headers": {
        "Accept": "application/json",
        "X-Requested-With": "XMLHttpRequest",
        "Referer": "https://target-site.com/"
    }
}
response = requests.post(FLARESOLVERR_URL, json=payload)
Session Management
Sessions allow you to reuse the same browser instance across multiple requests, maintaining cookies and state.
Creating a Session
def create_session(session_id=None):
    payload = {
        "cmd": "sessions.create"
    }
    if session_id:
        payload["session"] = session_id
    response = requests.post(FLARESOLVERR_URL, json=payload)
    result = response.json()
    if result["status"] == "ok":
        return result["session"]
    raise Exception(f"Error creating session: {result.get('message')}")

session_id = create_session("my-scraper-session")
print(f"Session created: {session_id}")
Using a Session
def session_get(url, session_id, max_timeout=30000):
    payload = {
        "cmd": "request.get",
        "url": url,
        "session": session_id,
        "maxTimeout": max_timeout
    }
    response = requests.post(FLARESOLVERR_URL, json=payload)
    return response.json()

# First request solves the challenge
result1 = session_get("https://target-site.com/", session_id)

# Subsequent requests reuse cookies (much faster)
result2 = session_get("https://target-site.com/page/2", session_id)
result3 = session_get("https://target-site.com/page/3", session_id)
Listing and Destroying Sessions
def list_sessions():
    payload = {"cmd": "sessions.list"}
    response = requests.post(FLARESOLVERR_URL, json=payload)
    return response.json()["sessions"]

def destroy_session(session_id):
    payload = {
        "cmd": "sessions.destroy",
        "session": session_id
    }
    response = requests.post(FLARESOLVERR_URL, json=payload)
    return response.json()

# Cleanup
sessions = list_sessions()
for s in sessions:
    destroy_session(s)
    print(f"Destroyed session: {s}")
Proxy Integration
FlareSolverr supports HTTP and SOCKS proxies, which is essential for IP rotation.
Single Proxy
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "proxy": {
        "url": "http://user:pass@proxy-host:7777"
    }
}
response = requests.post(FLARESOLVERR_URL, json=payload)
SOCKS5 Proxy
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "proxy": {
        "url": "socks5://user:pass@proxy-host:1080"
    }
}
Rotating Proxies
import random

proxy_list = [
    "http://user:pass@gate1.provider.com:7777",
    "http://user:pass@gate2.provider.com:7778",
    "http://user:pass@gate3.provider.com:7779",
]

def fetch_with_proxy_rotation(url):
    proxy = random.choice(proxy_list)
    payload = {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": 60000,
        "proxy": {"url": proxy}
    }
    response = requests.post(FLARESOLVERR_URL, json=payload)
    return response.json()

For proxy recommendations, check our proxy provider reviews and best proxy roundups.
Production-Grade Python Client
Here’s a robust FlareSolverr client class suitable for production use:
import requests
import time
import logging
from typing import Optional, Dict, Any

logger = logging.getLogger(__name__)

class FlareSolverrClient:
    def __init__(
        self,
        base_url: str = "http://localhost:8191/v1",
        default_timeout: int = 60000,
        max_retries: int = 3,
        retry_delay: float = 5.0,
        proxy: Optional[str] = None
    ):
        self.base_url = base_url
        self.default_timeout = default_timeout
        self.max_retries = max_retries
        self.retry_delay = retry_delay
        self.default_proxy = proxy

    def _make_request(self, payload: Dict[str, Any]) -> Dict:
        for attempt in range(self.max_retries):
            try:
                response = requests.post(
                    self.base_url,
                    json=payload,
                    timeout=self.default_timeout / 1000 + 10
                )
                result = response.json()
                if result["status"] == "ok":
                    return result
                logger.warning(
                    f"FlareSolverr returned error: {result.get('message')}"
                )
            except requests.exceptions.Timeout:
                logger.warning(f"Request timeout on attempt {attempt + 1}")
            except requests.exceptions.ConnectionError:
                logger.error("Cannot connect to FlareSolverr. Is it running?")
            except Exception as e:
                logger.error(f"Unexpected error: {e}")
            if attempt < self.max_retries - 1:
                time.sleep(self.retry_delay)
        raise Exception(f"Failed after {self.max_retries} attempts")

    def get(
        self,
        url: str,
        session: Optional[str] = None,
        proxy: Optional[str] = None,
        headers: Optional[Dict] = None,
        max_timeout: Optional[int] = None
    ) -> Dict:
        payload = {
            "cmd": "request.get",
            "url": url,
            "maxTimeout": max_timeout or self.default_timeout
        }
        if session:
            payload["session"] = session
        if proxy or self.default_proxy:
            payload["proxy"] = {"url": proxy or self.default_proxy}
        if headers:
            payload["headers"] = headers
        result = self._make_request(payload)
        return result["solution"]

    def post(
        self,
        url: str,
        data: str,
        session: Optional[str] = None,
        proxy: Optional[str] = None,
        max_timeout: Optional[int] = None
    ) -> Dict:
        payload = {
            "cmd": "request.post",
            "url": url,
            "postData": data,
            "maxTimeout": max_timeout or self.default_timeout
        }
        if session:
            payload["session"] = session
        if proxy or self.default_proxy:
            payload["proxy"] = {"url": proxy or self.default_proxy}
        result = self._make_request(payload)
        return result["solution"]

    def create_session(self, session_id: Optional[str] = None) -> str:
        payload = {"cmd": "sessions.create"}
        if session_id:
            payload["session"] = session_id
        result = self._make_request(payload)
        return result["session"]

    def destroy_session(self, session_id: str) -> None:
        payload = {
            "cmd": "sessions.destroy",
            "session": session_id
        }
        self._make_request(payload)

# Usage
client = FlareSolverrClient(
    proxy="http://user:pass@residential-proxy:7777",
    max_retries=3
)

# Create persistent session
session = client.create_session()

# Scrape multiple pages
for page in range(1, 50):
    solution = client.get(
        f"https://target-site.com/products?page={page}",
        session=session
    )
    html = solution["response"]
    # Parse HTML with BeautifulSoup...
    time.sleep(2)  # Be respectful

# Cleanup
client.destroy_session(session)
Performance Optimization
1. Use Sessions for Sequential Scraping
Sessions avoid re-solving Cloudflare challenges on every request. The first request takes 10-15 seconds; subsequent requests within the same session typically take 2-5 seconds.
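To keep every request in a crawl pinned to one session, a thin wrapper helps. In the sketch below, `solve` is a stand-in (an assumption) for the actual POST to the FlareSolverr API, so the reuse logic is visible on its own:

```python
# Sketch: pin all requests for a crawl to one FlareSolverr session so only
# the first request pays the challenge-solving cost. `solve` stands in for
# the real HTTP call to the FlareSolverr API.

def make_session_fetcher(solve, session_id):
    """Return a fetch(url) that always reuses the same session."""
    def fetch(url, max_timeout=30000):
        return solve({
            "cmd": "request.get",
            "url": url,
            "session": session_id,  # reuse cookies and browser state
            "maxTimeout": max_timeout,
        })
    return fetch

# With a recording stub in place of the real API call:
seen = []
fetch = make_session_fetcher(lambda p: seen.append(p) or p, "crawl-1")
fetch("https://target-site.com/")        # first call solves the challenge
fetch("https://target-site.com/page/2")  # later calls ride the same session
```

In real use, `solve` would be `lambda p: requests.post(FLARESOLVERR_URL, json=p).json()`.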
2. Run Multiple FlareSolverr Instances
For parallel scraping, run multiple instances:
version: "3.8"
services:
  flaresolverr-1:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    ports:
      - "8191:8191"
    deploy:
      resources:
        limits:
          memory: 1G
  flaresolverr-2:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    ports:
      - "8192:8191"
    deploy:
      resources:
        limits:
          memory: 1G
  flaresolverr-3:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    ports:
      - "8193:8191"
    deploy:
      resources:
        limits:
          memory: 1G

# Load balance across instances
import random

instances = [
    "http://localhost:8191/v1",
    "http://localhost:8192/v1",
    "http://localhost:8193/v1",
]

def get_instance():
    return random.choice(instances)
3. Memory Management
FlareSolverr can consume significant memory. Monitor and restart if needed:
# Check memory usage
docker stats flaresolverr

# Enforce a hard limit via Docker Compose; if the container exceeds it,
# the kernel kills it and restart: unless-stopped brings it back up
deploy:
  resources:
    limits:
      memory: 2G
4. Session Cleanup
Orphaned sessions consume memory. Implement automatic cleanup:
import time

class SessionManager:
    def __init__(self, client, max_age=300):
        self.client = client
        self.max_age = max_age
        self.sessions = {}

    def get_session(self, domain):
        now = time.time()
        if domain in self.sessions:
            session_id, created_at = self.sessions[domain]
            if now - created_at < self.max_age:
                return session_id
            # Session expired: destroy it and fall through to create a new one
            try:
                self.client.destroy_session(session_id)
            except Exception:
                pass
        session_id = self.client.create_session()
        self.sessions[domain] = (session_id, now)
        return session_id

    def cleanup(self):
        now = time.time()
        expired = [
            domain for domain, (_, created)
            in self.sessions.items()
            if now - created > self.max_age
        ]
        for domain in expired:
            session_id, _ = self.sessions.pop(domain)
            try:
                self.client.destroy_session(session_id)
            except Exception:
                pass
Integration with Scrapy
# Scrapy downloader middleware (enable via DOWNLOADER_MIDDLEWARES in settings.py)
import requests as http_requests
from scrapy.http import HtmlResponse

class FlareSolverrMiddleware:
    def __init__(self, flaresolverr_url):
        self.flaresolverr_url = flaresolverr_url

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            flaresolverr_url=crawler.settings.get(
                'FLARESOLVERR_URL', 'http://localhost:8191/v1'
            )
        )

    def process_request(self, request, spider):
        payload = {
            "cmd": "request.get",
            "url": request.url,
            "maxTimeout": 60000
        }
        response = http_requests.post(self.flaresolverr_url, json=payload)
        result = response.json()
        if result["status"] == "ok":
            return HtmlResponse(
                url=result["solution"]["url"],
                body=result["solution"]["response"].encode(),
                request=request,
                encoding='utf-8'
            )
        # Returning None lets Scrapy fall back to its default downloader
        return None
Troubleshooting
Challenge Loop (Never Resolves)
Symptom: FlareSolverr times out without solving the challenge.
Solutions:
- Increase the BROWSER_TIMEOUT environment variable
- Add a proxy (datacenter IPs often fail)
- Check if the site uses Enterprise Bot Management (FlareSolverr may not handle it)
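A pragmatic response to a challenge loop is to retry through progressively different exits: first directly, then through each available proxy. This fallback logic is not part of FlareSolverr itself; in the sketch below, `solve` stands in for the actual API call so the retry order can be shown (and tested) on its own:

```python
# Sketch: retry a challenge-looping URL through different exits.
# `solve` is a stand-in for the real POST to the FlareSolverr API.

def solve_with_fallback(solve, url, proxies):
    """Try a direct solve, then each proxy in turn; raise if all fail."""
    attempts = [None] + list(proxies)  # None = no proxy
    last_error = None
    for proxy in attempts:
        payload = {"cmd": "request.get", "url": url, "maxTimeout": 60000}
        if proxy:
            payload["proxy"] = {"url": proxy}
        try:
            return solve(payload)
        except Exception as exc:
            last_error = exc  # challenge loop / timeout: try the next exit
    raise RuntimeError(f"All attempts failed for {url}") from last_error
```

Injecting the solver keeps the fallback policy separate from transport, so the same logic works with any of the requests-based helpers above.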
High Memory Usage
Symptom: Container uses 2GB+ RAM.
Solutions:
- Destroy unused sessions regularly
- Restart the container periodically
- Limit concurrent requests
Slow Response Times
Symptom: Each request takes 15+ seconds.
Solutions:
- Use sessions to avoid repeated challenge solving
- Run multiple instances for parallel scraping
- Ensure your Docker host has adequate CPU
Connection Refused
Symptom: Cannot connect to localhost:8191.
Solutions:
- Verify the container is running: docker ps
- Check container logs: docker logs flaresolverr
- Ensure port 8191 is not blocked by a firewall
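Before kicking off a long job, it can help to probe the service programmatically, the same way the sessions.list check earlier does. A standard-library sketch (the endpoint is assumed to be the default local one):

```python
# Liveness probe for FlareSolverr using only the standard library.
import json
import urllib.error
import urllib.request

def flaresolverr_alive(base_url="http://localhost:8191/v1", timeout=5.0):
    """Return True if FlareSolverr answers a sessions.list command."""
    body = json.dumps({"cmd": "sessions.list"}).encode()
    req = urllib.request.Request(
        base_url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp).get("status") == "ok"
    except (OSError, ValueError):  # refused, timed out, or non-JSON reply
        return False
```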
Limitations
FlareSolverr works well for many Cloudflare-protected sites, but it has limitations:
- Enterprise Bot Management: Sites with Cloudflare’s advanced bot scoring may still block FlareSolverr
- Turnstile: Support for Cloudflare Turnstile is partial and may require additional CAPTCHA solving services
- Speed: Browser-based solving is inherently slower than HTTP-level approaches like curl_cffi
- Resource usage: Each instance consumes 500MB-2GB of RAM
For sites where FlareSolverr falls short, consider combining it with residential proxies or using Undetected ChromeDriver directly. For a broader overview of Cloudflare bypass methods, see our complete Cloudflare bypass guide.
Conclusion
FlareSolverr is an excellent tool for teams that want Cloudflare bypass without the complexity of managing browser automation code. Its Docker-based deployment and REST API make it easy to integrate with any programming language or scraping framework.
For best results, combine FlareSolverr with residential proxies, use sessions for sequential scraping, and implement proper session management to control resource usage. When you hit its limits, fall back to direct browser automation with stealth plugins.
Related Reading
- 403 Forbidden in Web Scraping: How to Fix It
- Best CAPTCHA Solving Services in 2026: Complete Comparison
- Anti-Phishing with Proxies: How Security Teams Use Mobile IPs
- Brand Protection with Proxies: Detect Counterfeit Sellers & Trademark Violations
- How Cybersecurity Teams Use Proxies for Threat Intelligence
- Using Mobile Proxies for Dark Web Monitoring and Research