A WebDriverException from Selenium can kill a scraping job silently, dump a cryptic stack trace with no clear path forward, or surface hours into a run when retrying is expensive. This guide covers every common cause in 2026, with real fixes ranked by how often they actually appear in production scrapers.
What triggers a WebDriverException
Selenium raises WebDriverException as a catch-all for driver-level failures. The exception message is where the signal lives. The three most common root causes in active scrapers right now are:
- driver/browser version mismatch (ChromeDriver vs Chrome binary)
- the browser process crashing or timing out during page load
- the target site detecting automation and terminating the session
A fourth cause that trips up containerised scrapers: missing system dependencies (fonts, GPU stubs, shared libs) that Chrome needs to start in headless mode.
A typical version-mismatch traceback:

```
selenium.common.exceptions.WebDriverException:
Message: unknown error: Chrome failed to start: exited abnormally.
(Driver info: chromedriver=124.0.6367.60, chrome=125.0.6422.141)
```

If your Chrome and ChromeDriver major versions differ, as they do here, start there before investigating anything else.
Driver and binary version mismatches
Chrome auto-updates on most hosts. ChromeDriver does not follow unless you manage it explicitly. In 2026 the cleanest solution is selenium-manager (bundled since Selenium 4.6) or undetected-chromedriver, both of which resolve the binary automatically.
```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")

# selenium-manager auto-downloads matching ChromeDriver
driver = webdriver.Chrome(options=options)
```

For Docker environments, pin both the Chrome and ChromeDriver versions in your Dockerfile using a versioned base image (`selenium/standalone-chrome:125.0`) rather than `latest`. This eliminates version drift entirely.
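If you manage the binaries yourself, a quick sanity check before launching the driver fails fast on a mismatch instead of surfacing it as a cryptic WebDriverException. A minimal sketch; the version-string formats below are assumptions based on typical `google-chrome --version` and `chromedriver --version` output:

```python
import re


def major_version(version_output: str) -> int:
    """Extract the major version from output like 'Google Chrome 125.0.6422.141'."""
    match = re.search(r"(\d+)\.\d+\.\d+\.\d+", version_output)
    if match is None:
        raise ValueError(f"no version number in {version_output!r}")
    return int(match.group(1))


def versions_match(chrome_output: str, driver_output: str) -> bool:
    """ChromeDriver and Chrome must share the same major version."""
    return major_version(chrome_output) == major_version(driver_output)


# Feed these from subprocess.run([...], capture_output=True, text=True).stdout
assert not versions_match(
    "Google Chrome 125.0.6422.141",  # the mismatched pair from the
    "ChromeDriver 124.0.6367.60",    # traceback earlier in this guide
)
```

Run it at container start or job start; raising early is much cheaper than a WebDriverException hours into a crawl.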
Timeout-related WebDriverExceptions
`WebDriverException: timeout` and `WebDriverException: unknown error: net::ERR_CONNECTION_TIMED_OUT` are distinct but related. The first is Selenium's own command timeout; the second is Chrome reporting a network failure. If you're seeing headless Chrome hang before the exception appears, the deeper causes are documented in Why Your Headless Chrome Times Out: Common Causes and Fixes (2026).

For Playwright users hitting the same network-level freezes, Playwright Page.goto Timeouts: Root Causes and Fixes for Scrapers covers the equivalent diagnosis path.

The fix sequence for Selenium timeouts:
- increase `page_load_timeout` and `implicitly_wait` only as a diagnostic step, not a permanent fix
- switch `page_load_strategy` to `"eager"` or `"none"` if you don't need full page load
- check whether the hang is DNS, TCP connect, or TTFB using a plain `requests.get` to the same URL
- if TTFB is fine but Selenium hangs, suspect a JS render block or a bot check injecting a spinner
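The DNS/TCP/TTFB check in that sequence can be scripted. A minimal triage sketch, assuming `requests` is installed in the scraper's environment; it times each phase separately so you can see which one stalls:

```python
import socket
import time
from urllib.parse import urlparse


def default_port(scheme: str) -> int:
    """Standard port for the URL scheme."""
    return 443 if scheme == "https" else 80


def triage(url: str, timeout: float = 10.0) -> dict:
    """Time DNS, TCP connect, and TTFB separately to locate a hang."""
    import requests  # assumption: available alongside Selenium

    parsed = urlparse(url)
    host = parsed.hostname
    port = parsed.port or default_port(parsed.scheme)
    timings = {}

    start = time.monotonic()
    addr = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0][4]
    timings["dns"] = time.monotonic() - start

    start = time.monotonic()
    with socket.create_connection(addr[:2], timeout=timeout):
        timings["tcp_connect"] = time.monotonic() - start

    start = time.monotonic()
    with requests.get(url, timeout=timeout, stream=True) as response:
        timings["ttfb"] = time.monotonic() - start  # headers received
        timings["status"] = response.status_code
    return timings
```

If `triage` returns quickly but Selenium still hangs on the same URL, the block is in rendering or a bot check, not in the network.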
```python
driver.set_page_load_timeout(20)
options.page_load_strategy = "eager"  # set on Options before webdriver.Chrome()
```

Anti-bot detection causing session termination
In 2026, most major sites run fingerprint checks on WebGL, canvas hash, navigator.webdriver, and TLS JA3 fingerprint before serving a response. When the session is killed server-side, Selenium throws WebDriverException: unknown error or loses the window handle entirely.
| detection vector | what it checks | bypass approach |
|---|---|---|
| `navigator.webdriver` | JS property set to `true` by default | patch via CDP or use undetected-chromedriver |
| TLS JA3 fingerprint | cipher suite order from the Chrome binary | use a residential proxy with TLS passthrough |
| canvas/WebGL hash | headless Chrome returns different hashes | enable the GPU flag or use a real-browser service |
| HTTP headers | missing Accept-Language, sec-ch-ua | set a full header profile via CDP |
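The first row of the table, patching `navigator.webdriver` over CDP, can be sketched as a pair of helpers. The patch sources here are illustrative examples, not a complete stealth profile; `Page.addScriptToEvaluateOnNewDocument` runs them before any page script executes:

```python
def stealth_patches() -> list:
    """CDP commands that mask the most common JS automation signals."""
    hide_webdriver = (
        "Object.defineProperty(navigator, 'webdriver',"
        " {get: () => undefined})"
    )
    fake_languages = (
        "Object.defineProperty(navigator, 'languages',"
        " {get: () => ['en-US', 'en']})"
    )
    return [
        ("Page.addScriptToEvaluateOnNewDocument", {"source": hide_webdriver}),
        ("Page.addScriptToEvaluateOnNewDocument", {"source": fake_languages}),
    ]


def apply_stealth(driver) -> None:
    """Send each patch over Chrome DevTools Protocol before navigating."""
    for command, params in stealth_patches():
        driver.execute_cdp_cmd(command, params)
```

Call `apply_stealth(driver)` immediately after `webdriver.Chrome(...)` and before the first `driver.get`. This does nothing for TLS-level detection; that row of the table needs a proxy, not a JS patch.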
Sites that serve a 403 Forbidden before the browser even renders are usually blocking at the WAF/CDN layer based on IP or TLS fingerprint; that requires a residential proxy, not a Selenium patch. If you're seeing 503 errors mid-session, the site is likely rate-limiting your IP range.
Container and system dependency failures
Headless Chrome in Docker fails silently if the container is missing libgbm, libnss3, or font packages. The exception message is usually `chrome not reachable` or `session not created`.

Checklist for container environments:
- run with `--no-sandbox` and `--disable-dev-shm-usage` (required in most container runtimes)
- mount `/dev/shm` with at least 512 MB, or pass `--shm-size=1g` to `docker run`
- use a Chromium-based base image rather than installing Chrome on top of a bare Ubuntu image
- if running rootless, verify `user-data-dir` write permissions
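The `/dev/shm` item on that checklist is easy to verify from inside the container at startup. A standard-library-only sketch; the 512 MB floor matches the recommendation above:

```python
import os

MIN_SHM_BYTES = 512 * 1024 * 1024  # the 512 MB floor from the checklist


def shm_size_bytes(path: str = "/dev/shm") -> int:
    """Size of the shared-memory mount in bytes; 0 if it is missing."""
    try:
        stats = os.statvfs(path)
    except (OSError, AttributeError):  # missing mount, or non-POSIX host
        return 0
    return stats.f_frsize * stats.f_blocks


def shm_ok() -> bool:
    """True when /dev/shm is large enough for Chrome's render buffers."""
    # With --disable-dev-shm-usage Chrome falls back to /tmp, so a small
    # /dev/shm only matters when that flag is absent.
    return shm_size_bytes() >= MIN_SHM_BYTES
```

Logging `shm_size_bytes()` at container start turns a silent render crash into an obvious misconfiguration message.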
For AWS Lambda and similar ephemeral compute, chrome-aws-lambda + the chromium npm package is the established pattern. The binary is pre-stripped of GPU dependencies that don't exist in that environment.
Debugging unknown WebDriverExceptions fast
When the exception message is generic, structured logging catches the actual failure:
```python
import logging

from selenium.common.exceptions import WebDriverException

logging.basicConfig(level=logging.DEBUG)

try:
    driver.get(url)
except WebDriverException as e:
    print(e.msg)         # human-readable message
    print(e.screen)      # base64-encoded screenshot, if available
    print(e.stacktrace)  # driver-side stack trace
```

`e.msg` strips the boilerplate. `e.screen` is often enough to tell you whether Chrome rendered at all before the exception.
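Once you have `e.msg`, a coarse classifier can route the failure into the buckets this guide covers. The substring heuristics below are assumptions drawn from common ChromeDriver messages, and they overlap in practice (the same "failed to start" text appears in version-mismatch tracebacks too), so treat the result as a starting point, not a verdict:

```python
def classify(message: str) -> str:
    """Map a WebDriverException message onto this guide's failure buckets."""
    msg = message.lower()
    if "this version of chromedriver" in msg:
        return "driver/browser version mismatch"
    if "timed out" in msg or "timeout" in msg:
        return "timeout"
    if "chrome not reachable" in msg or "failed to start" in msg:
        return "container/system dependencies"
    if "invalid session id" in msg or "window already closed" in msg:
        return "possible anti-bot session termination"
    return "unknown (read e.msg and e.screen manually)"
```

Feed it from the except block above with `print(classify(e.msg))`, or use it to decide whether a retry is even worth attempting.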
Bottom line
Match your ChromeDriver to your Chrome binary first, then work down the list: timeouts, detection, container deps. Most WebDriverException failures in production scrapers fall into one of those four buckets. dataresearchtools.com covers the full scraping stack; if the exception disappears but your data stops coming in, you've crossed from a driver problem into an anti-bot or infrastructure problem worth diagnosing separately.