FlareSolverr: Bypass Cloudflare with Docker in 2026
FlareSolverr is an open-source proxy server that solves Cloudflare’s JavaScript challenges using a real browser. You send it a URL, and it returns the page content after handling all the challenge-response flows that block standard HTTP clients.
It’s the go-to solution for automation projects like Sonarr, Radarr, and Jackett, but it works for any scraping pipeline. This guide covers everything from setup to production deployment.
How FlareSolverr Works
FlareSolverr runs a headless Chromium browser inside a Docker container. When you send a request to its API:
- FlareSolverr opens the URL in the browser
- The browser executes Cloudflare’s JavaScript challenge
- After the challenge completes, FlareSolverr extracts the page HTML, cookies, and headers
- It returns everything as a JSON response
This means your scraper never needs to run a browser itself — FlareSolverr acts as a middleware that converts Cloudflare-protected pages into plain HTML.
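Concretely, the exchange is a single JSON round trip. Here is a minimal sketch of the payload a scraper sends and the shape of a successful response; the field names follow the v1 API, but the values shown are illustrative:

```python
# Sketch of the FlareSolverr v1 request/response contract.
# The payload is what a scraper POSTs to http://localhost:8191/v1;
# the response below is illustrative, with field names from the v1 API.

payload = {
    "cmd": "request.get",              # command: fetch a URL with the browser
    "url": "https://target-site.com",
    "maxTimeout": 60000,               # give the challenge up to 60s to resolve
}

# A successful response carries the solved page under "solution".
example_response = {
    "status": "ok",
    "message": "Challenge solved!",
    "solution": {
        "url": "https://target-site.com",
        "status": 200,                   # HTTP status of the final page
        "response": "<html>...</html>",  # full page HTML after the challenge
        "cookies": [{"name": "cf_clearance", "value": "..."}],
        "userAgent": "Mozilla/5.0 ...",
    },
}
```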
Installation with Docker
The recommended way to run FlareSolverr is Docker. It handles all browser dependencies automatically.
Basic Docker Run
```shell
docker run -d \
  --name flaresolverr \
  -p 8191:8191 \
  -e LOG_LEVEL=info \
  -e TZ=UTC \
  --restart unless-stopped \
  ghcr.io/flaresolverr/flaresolverr:latest
```

Docker Compose
For production setups, use Docker Compose:
```yaml
version: "3.8"
services:
  flaresolverr:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - LOG_LEVEL=info
      - LOG_HTML=false
      - CAPTCHA_SOLVER=none
      - TZ=UTC
      - HEADLESS=true
      - BROWSER_TIMEOUT=40000
      - TEST_URL=https://www.google.com
    ports:
      - "8191:8191"
    restart: unless-stopped
```

Environment Variables
| Variable | Default | Description |
|---|---|---|
| LOG_LEVEL | info | Logging level: debug, info, warn, error |
| LOG_HTML | false | Log full HTML responses (large output) |
| CAPTCHA_SOLVER | none | CAPTCHA solving service: hcaptcha-solver, harvester |
| HEADLESS | true | Run browser in headless mode |
| BROWSER_TIMEOUT | 40000 | Max time (ms) to wait for challenges |
| TEST_URL | https://www.google.com | URL used for health checks |
| LANG | none | Browser language (e.g., en_US) |
Verify Installation
```shell
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{"cmd": "request.get", "url": "https://www.google.com", "maxTimeout": 30000}'
```

You should get a JSON response with "status": "ok".
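The same check can be scripted, which is handy for monitoring. A small Python sketch, assuming a FlareSolverr instance listening on localhost:8191:

```python
# Minimal health check against a local FlareSolverr instance.
# Assumes the container from the Docker section is running on port 8191.
import requests

def flaresolverr_healthy(endpoint="http://localhost:8191/v1"):
    try:
        resp = requests.post(endpoint, json={
            "cmd": "request.get",
            "url": "https://www.google.com",
            "maxTimeout": 30000,
        }, timeout=60)
        return resp.json().get("status") == "ok"
    except requests.RequestException:
        # Connection refused, timeout, etc. all count as unhealthy
        return False

if __name__ == "__main__":
    print("healthy" if flaresolverr_healthy() else "unreachable")
```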
API Reference
FlareSolverr exposes a single endpoint at POST /v1.
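The endpoint multiplexes on the `cmd` field of the JSON body. As a quick reference, these are the commands used in this guide (descriptions paraphrased):

```python
# Commands accepted by POST /v1, keyed by the "cmd" field.
COMMANDS = {
    "request.get": "fetch a URL with a GET request",
    "request.post": "fetch a URL with a POST body",
    "sessions.create": "open a persistent browser session",
    "sessions.list": "list active session ids",
    "sessions.destroy": "close a session and free its browser",
}
```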
GET Request
```python
import requests

payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000
}

response = requests.post(
    "http://localhost:8191/v1",
    json=payload
)

data = response.json()
if data["status"] == "ok":
    html = data["solution"]["response"]
    cookies = data["solution"]["cookies"]
    user_agent = data["solution"]["userAgent"]
    status_code = data["solution"]["status"]
    print(f"Status: {status_code}")
    print(f"Content length: {len(html)}")
    print(f"Cookies: {len(cookies)}")
```

POST Request
For form submissions or API calls:
```python
import requests

payload = {
    "cmd": "request.post",
    "url": "https://target-site.com/api/search",
    "maxTimeout": 60000,
    "postData": "query=proxy+providers&page=1",
    "headers": {
        "Content-Type": "application/x-www-form-urlencoded"
    }
}

response = requests.post(
    "http://localhost:8191/v1",
    json=payload
)

data = response.json()
print(data["solution"]["response"])
```

Session Management
Sessions keep the browser open between requests, preserving cookies and state:
```python
import requests

FLARESOLVERR = "http://localhost:8191/v1"

# Create session
create_resp = requests.post(FLARESOLVERR, json={
    "cmd": "sessions.create",
    "session": "my_session"
})
print(create_resp.json())

# Use session for requests
resp1 = requests.post(FLARESOLVERR, json={
    "cmd": "request.get",
    "url": "https://target-site.com/login",
    "session": "my_session",
    "maxTimeout": 60000
})

# Session retains cookies from first request
resp2 = requests.post(FLARESOLVERR, json={
    "cmd": "request.get",
    "url": "https://target-site.com/dashboard",
    "session": "my_session",
    "maxTimeout": 60000
})

# Clean up
requests.post(FLARESOLVERR, json={
    "cmd": "sessions.destroy",
    "session": "my_session"
})
```

List Active Sessions
```python
response = requests.post(FLARESOLVERR, json={
    "cmd": "sessions.list"
})
print(response.json()["sessions"])
```

Adding Proxy Support
FlareSolverr supports HTTP and SOCKS5 proxies. This is critical for scraping at scale — without residential proxies, Cloudflare’s IP reputation system will quickly flag the FlareSolverr container’s datacenter IP.
Per-Request Proxy
```python
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "proxy": {
        "url": "http://user:pass@residential.example.com:7777"
    }
}

response = requests.post(
    "http://localhost:8191/v1",
    json=payload
)
```

SOCKS5 Proxy
```python
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "proxy": {
        "url": "socks5://user:pass@socks-proxy.example.com:1080"
    }
}
```

Session with Persistent Proxy
```python
# Create session with proxy - all requests in this session use it
requests.post(FLARESOLVERR, json={
    "cmd": "sessions.create",
    "session": "proxied_session",
    "proxy": {
        "url": "http://user:pass@residential.example.com:7777"
    }
})
```

Building a Scraping Pipeline with FlareSolverr
Here’s a complete example that scrapes multiple pages through FlareSolverr:
```python
import requests
import time
from bs4 import BeautifulSoup

class FlareSolverrScraper:
    def __init__(self, base_url="http://localhost:8191/v1", proxy=None):
        self.base_url = base_url
        self.proxy = proxy
        self.session_id = None

    def create_session(self):
        payload = {"cmd": "sessions.create"}
        if self.proxy:
            payload["proxy"] = {"url": self.proxy}
        resp = requests.post(self.base_url, json=payload).json()
        self.session_id = resp.get("session")
        return self.session_id

    def fetch(self, url, max_timeout=60000):
        payload = {
            "cmd": "request.get",
            "url": url,
            "maxTimeout": max_timeout,
        }
        if self.session_id:
            payload["session"] = self.session_id
        if self.proxy and not self.session_id:
            payload["proxy"] = {"url": self.proxy}
        resp = requests.post(self.base_url, json=payload).json()
        if resp["status"] == "ok":
            return {
                "html": resp["solution"]["response"],
                "cookies": resp["solution"]["cookies"],
                "status": resp["solution"]["status"],
            }
        else:
            raise Exception(f"FlareSolverr error: {resp.get('message')}")

    def destroy_session(self):
        if self.session_id:
            requests.post(self.base_url, json={
                "cmd": "sessions.destroy",
                "session": self.session_id
            })

    def __enter__(self):
        self.create_session()
        return self

    def __exit__(self, *args):
        self.destroy_session()

# Usage
proxy = "http://user:pass@residential.example.com:7777"
with FlareSolverrScraper(proxy=proxy) as scraper:
    urls = [
        "https://target-site.com/page/1",
        "https://target-site.com/page/2",
        "https://target-site.com/page/3",
    ]
    for url in urls:
        result = scraper.fetch(url)
        soup = BeautifulSoup(result["html"], "html.parser")
        title = soup.select_one("h1")
        print(f"{url}: {title.text if title else 'No title'}")
        time.sleep(2)  # Be respectful
```

Transferring Cookies to Requests
For efficiency, solve the challenge once with FlareSolverr, then use the cookies with a lightweight HTTP client for subsequent requests:
```python
from curl_cffi import requests as curl_requests
import requests

def get_cf_cookies_via_flaresolverr(url, proxy=None):
    payload = {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": 60000,
    }
    if proxy:
        payload["proxy"] = {"url": proxy}
    resp = requests.post(
        "http://localhost:8191/v1", json=payload
    ).json()
    if resp["status"] == "ok":
        return (
            resp["solution"]["cookies"],
            resp["solution"]["userAgent"]
        )
    raise Exception("Challenge failed")

# Get cookies
cookies, user_agent = get_cf_cookies_via_flaresolverr(
    "https://target-site.com",
    proxy="http://user:pass@residential.example.com:7777"
)

# Transfer to curl_cffi session
session = curl_requests.Session(impersonate="chrome120")
for cookie in cookies:
    session.cookies.set(cookie["name"], cookie["value"])
session.headers["User-Agent"] = user_agent

# Fast scraping without browser overhead
for page in range(1, 50):
    resp = session.get(f"https://target-site.com/page/{page}")
    print(f"Page {page}: {resp.status_code}")
```

Scaling FlareSolverr
Running Multiple Instances
For concurrent scraping, run multiple FlareSolverr containers:
```yaml
version: "3.8"
services:
  flaresolverr-1:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    ports:
      - "8191:8191"
    environment:
      - HEADLESS=true
    restart: unless-stopped
  flaresolverr-2:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    ports:
      - "8192:8191"
    environment:
      - HEADLESS=true
    restart: unless-stopped
  flaresolverr-3:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    ports:
      - "8193:8191"
    environment:
      - HEADLESS=true
    restart: unless-stopped
```

Load Balancing Across Instances
```python
import random
import requests

INSTANCES = [
    "http://localhost:8191/v1",
    "http://localhost:8192/v1",
    "http://localhost:8193/v1",
]

def fetch_balanced(url, proxy=None):
    # Pick an instance at random for each request
    endpoint = random.choice(INSTANCES)
    payload = {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": 60000,
    }
    if proxy:
        payload["proxy"] = {"url": proxy}
    return requests.post(endpoint, json=payload).json()
```

Troubleshooting
Challenge Loop (Never Resolves)
If FlareSolverr gets stuck in a challenge loop:
- Increase the timeout: set `BROWSER_TIMEOUT=60000` and `maxTimeout: 60000`
- Add a proxy: datacenter IPs often get infinite challenges
- Check your FlareSolverr version: older versions may not handle newer Cloudflare challenges
High Memory Usage
Each browser instance uses 200-500MB RAM. Manage this by:
- Destroying sessions when done
- Limiting concurrent sessions
- Using `--restart unless-stopped` to recover from OOM crashes
“Could not find browser revision”
This usually means the Docker image’s bundled Chromium is outdated:
```shell
docker pull ghcr.io/flaresolverr/flaresolverr:latest
docker restart flaresolverr
```

Timeout Errors
```python
# Retry logic for flaky challenges
import time

def fetch_with_retry(url, max_retries=3):
    for attempt in range(max_retries):
        try:
            # fetch_via_flaresolverr is your own wrapper around the
            # POST /v1 call, e.g. the fetch() method from the pipeline above
            result = fetch_via_flaresolverr(url)
            return result
        except Exception as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            time.sleep(5)
    raise Exception(f"All {max_retries} attempts failed")
```

FlareSolverr vs Alternatives
| Feature | FlareSolverr | Playwright | Undetected ChromeDriver |
|---|---|---|---|
| Docker deployment | Yes | Manual | Manual |
| API interface | REST | Python/JS | Python |
| Session management | Built-in | Manual | Manual |
| Proxy support | Yes | Yes | Yes |
| Turnstile support | Partial | Better | Better |
| Resource usage | High | High | High |
| Community | Large (arr stack) | Large | Large |
| Best for | Microservice architecture | Direct scripting | Selenium projects |
FAQ
Is FlareSolverr free?
Yes. FlareSolverr is open-source (MIT license) and free to use. Your costs are limited to server resources (a VPS with 2GB RAM is sufficient for a single instance) and proxy fees if you use residential proxies.
Can FlareSolverr solve CAPTCHAs?
FlareSolverr can be configured with external CAPTCHA solvers via the CAPTCHA_SOLVER environment variable. By default, it only handles JavaScript challenges, not visual CAPTCHAs. For CAPTCHA-heavy sites, see our CAPTCHA solving services guide.
How many concurrent requests can FlareSolverr handle?
A single instance handles one request at a time (requests are queued). For concurrency, run multiple instances behind a load balancer. Each instance needs approximately 500MB-1GB RAM.
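Under that constraint, client-side concurrency means fanning requests out across instances. A sketch using a thread pool with round-robin assignment; the instance URLs match the scaling section above, and `fetch_fn` is a placeholder for your own request wrapper:

```python
# Fan requests out across several FlareSolverr instances with a thread pool.
# One in-flight request per instance matches FlareSolverr's internal queueing.
from concurrent.futures import ThreadPoolExecutor
import itertools

INSTANCES = [
    "http://localhost:8191/v1",
    "http://localhost:8192/v1",
    "http://localhost:8193/v1",
]
_rr = itertools.cycle(INSTANCES)  # round-robin instead of random choice

def assign(urls):
    # Pair each target URL with the next instance in rotation
    return [(next(_rr), url) for url in urls]

def fetch_all(urls, fetch_fn, workers=len(INSTANCES)):
    # fetch_fn(endpoint, url) is a placeholder for your own request function,
    # e.g. a wrapper around requests.post(endpoint, json={...})
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda pair: fetch_fn(*pair), assign(urls)))
```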
Does FlareSolverr work with Cloudflare’s enterprise Bot Management?
FlareSolverr struggles with enterprise-level Bot Management, which uses behavioral analysis beyond JavaScript challenges. For enterprise-protected sites, dedicated browser automation with stealth plugins and residential proxies typically work better.
Can I use FlareSolverr with Sonarr/Radarr/Jackett?
Yes — this is one of FlareSolverr’s primary use cases. In Jackett, go to the indexer settings and set the FlareSolverr URL to http://flaresolverr:8191. In Prowlarr, add FlareSolverr under Settings > Indexers as an indexer proxy, then tag the indexers that should route through it.
Conclusion
FlareSolverr is the simplest way to add Cloudflare bypass capability to any application. Its REST API means you can integrate it with any language or framework, and Docker deployment makes it easy to scale. For best results, combine it with residential proxies and consider the cookie transfer approach to minimize browser overhead at scale.
Useful Resources
- FlareSolverr GitHub Repository
- Bypass Cloudflare with Python
- How to Bypass Cloudflare Protection
- IP Rotation Strategies
- 403 Forbidden in Web Scraping: How to Fix It
- Best CAPTCHA Solving Services in 2026: Complete Comparison
- Anti-Phishing with Proxies: How Security Teams Use Mobile IPs
- Brand Protection with Proxies: Detect Counterfeit Sellers & Trademark Violations
- How Cybersecurity Teams Use Proxies for Threat Intelligence
- Using Mobile Proxies for Dark Web Monitoring and Research