How to Bypass Kasada Anti-Bot Protection in 2026

kasada blocks bots by injecting an obfuscated javascript challenge (kpsdk) that fingerprints the browser, runs a proof-of-work, and hides the real content behind an x-kpsdk-ct token. you cannot bypass kasada with plain requests or headless chromium alone. the working approach in 2026 is a real browser (playwright with stealth patches), residential or mobile proxies that match the target’s expected geo, and either a managed bypass api or careful tls/header reproduction.

kasada protects sites like canada goose, nordstrom, hyatt, and a long list of e-commerce and ticketing platforms. it’s known internally as polyform and it ships as kpsdk. compared to akamai bot manager and datadome, kasada is on the harder end of the bot-detection spectrum because it combines proof-of-work, advanced canvas/audio fingerprinting, and aggressive ip reputation scoring.

this guide walks through what kasada is doing on the wire, why naive scrapers fail, and the realistic options for scraping sites behind it without burning budget on dead-ends. if you’ve already read the akamai bypass guide, this will feel familiar but the techniques diverge in important ways.

what kasada actually does

kasada’s protection is layered. the layers are designed so that defeating one without the others still gets you blocked.

layer 1: client-side challenge. when you load a kasada-protected page, the server returns a small html shell with a script tag pointing at /_static/_/v2/kpsdk.js (path varies). that script is heavily obfuscated and runs immediately on page load. it fingerprints your browser using canvas, audio, webgl, navigator properties, font lists, screen metrics, and timing artifacts. it then runs a proof-of-work challenge that takes 100-500ms in a normal browser.
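kasada’s actual proof-of-work is proprietary and buried inside the obfuscated script, but the general mechanic is worth seeing: finding the answer costs cpu time, while checking it costs a single hash. the sketch below is a generic hash-based pow, purely illustrative — not kasada’s real algorithm:

```python
import hashlib

def solve_pow(seed: str, difficulty: int = 4) -> int:
    """brute-force a nonce until sha256(seed + nonce) starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(f"{seed}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

def verify_pow(seed: str, nonce: int, difficulty: int = 4) -> bool:
    # verification is one hash, so the server's cost is negligible
    return hashlib.sha256(f"{seed}{nonce}".encode()).hexdigest().startswith("0" * difficulty)

nonce = solve_pow("session-seed", 3)   # low difficulty so the demo runs quickly
assert verify_pow("session-seed", nonce, 3)
```

raising the difficulty by one hex digit multiplies the average search time by 16, which is how a vendor tunes the challenge to land in that 100-500ms window for a real browser while making bulk automated solving expensive.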

layer 2: token issuance. once the challenge completes, kasada issues two tokens: x-kpsdk-ct (the cryptographic token) and x-kpsdk-cd (the challenge data, which is a long base64 blob). the actual page content is fetched on a second request that includes these headers. without them, you get a 429 or a blank shell.
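to make the shape of that second request concrete, here’s a sketch that attaches the two headers. the header names come from this article; the url and token values are placeholders, and in practice the tokens have to be harvested from a real browser session that solved the challenge:

```python
def kpsdk_headers(ct_token: str, cd_blob: str) -> dict:
    """headers for the follow-up request. names are the ones kasada's sdk attaches;
    values must come from a real challenge solve — they cannot be fabricated."""
    return {
        "x-kpsdk-ct": ct_token,
        "x-kpsdk-cd": cd_blob,
    }

# hypothetical usage once tokens were captured from a browser session:
# import requests
# r = requests.get("https://kasada-protected-site.com/page",
#                  headers={**kpsdk_headers(ct, cd), "user-agent": CHROME_UA})
```

this is also why token-harvesting approaches (covered below) are fragile: the token format and validation change whenever kasada ships an sdk update.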

layer 3: ongoing validation. kasada also runs continuous behavioral checks during the session. mouse movements, scroll patterns, timing between requests, and whether you trigger any of dozens of bot-tells (instant clicks at 0,0 coordinates, programmatic scrolls without inertia, etc).
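to avoid the teleport-click tell, automation code typically interpolates mouse movement instead of jumping straight to the target. a generic sketch (not kasada-specific) that generates an eased, jittered path you could feed point-by-point to playwright’s page.mouse.move:

```python
import math
import random

def human_path(x0, y0, x1, y1, steps=25):
    """generate intermediate mouse points with ease-in-out pacing and pixel jitter,
    so movement has acceleration and deceleration instead of a single teleport."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        e = (1 - math.cos(math.pi * t)) / 2   # ease-in-out: slow start, slow stop
        x = x0 + (x1 - x0) * e + random.uniform(-1.5, 1.5)
        y = y0 + (y1 - y0) * e + random.uniform(-1.5, 1.5)
        pts.append((x, y))
    pts[0], pts[-1] = (x0, y0), (x1, y1)      # pin exact endpoints
    return pts
```

real behavioral detection looks at much more than the path shape (timing distributions, pressure on touch devices, scroll inertia), so treat this as one small piece, not a complete answer to layer 3.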

layer 4: ip and tls reputation. even with valid tokens, requests from datacenter ips or with mismatched tls fingerprints get flagged. kasada works with several commercial ip reputation feeds.

defeat one layer and the others still block you. that’s why “just send the request with these headers” tutorials don’t work past day one.

why headless chromium alone fails

a fresh headless chromium fails kasada for at least three reasons.

first, navigator.webdriver is true. that’s a one-shot bot-tell. every anti-bot vendor checks it.

second, chromium exposes properties that differ from real chrome in both directions. chrome.runtime is missing in headless, and window.outerHeight equals window.innerHeight because there is no visible browser chrome around the viewport. these get flagged.

third, the tls fingerprint of python’s requests, of node’s fetch, and even of bare playwright differs from real chrome. ja3 and ja4 fingerprints are a known signal kasada uses.
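ja3 itself is a public format: an md5 over five comma-joined fields from the tls clienthello (version, cipher suites, extensions, elliptic curves, point formats), each field dash-separated in wire order. a sketch of the computation shows why any difference in cipher ordering between your client and real chrome changes the hash:

```python
import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    """md5 over the five ja3 fields: fields comma-separated,
    values within a field dash-separated, in clienthello order."""
    fields = ",".join([
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ])
    return hashlib.md5(fields.encode()).hexdigest()

# made-up field values for illustration — real chrome's lists are much longer:
print(ja3_hash(771, [4865, 4866, 4867], [0, 11, 10, 35], [29, 23, 24], [0]))
```

note that chrome has randomized its extension order since 2023, which is part of why the sorted-and-hashed ja4 scheme exists; the practical point is unchanged: your http client’s hello must look like a real browser’s, which is what tools like curl-impersonate and stealth browser builds provide.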

if you run await crawler.arun(url="https://kasada-protected-site.com") with default crawl4ai settings, you’ll get a blank challenge page. same with default playwright. same with selenium-with-undetected-chromedriver out of the box.

the four working approaches in 2026

there are four practical paths. they range from cheapest-but-most-work to most-expensive-but-easiest.

approach 1: managed bypass apis

the easiest path. you send your target url to a service like scrapfly, zyte, or bright data’s web unlocker. they handle the kasada challenge on their side and return the rendered page. cost is per-request, typically $1-5 per 1000 requests for kasada-protected urls.

scrapfly’s anti-scraping protection bypass:

from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="your-scrapfly-key")
result = client.scrape(ScrapeConfig(
    url="https://www.canadagoose.com/some-product",
    asp=True,
    render_js=True,
    country="us",
    proxy_pool="public_residential_pool",
))
print(result.content)

bright data’s web unlocker:

import requests

proxy = "http://brd-customer-XXX-zone-unlocker:password@brd.superproxy.io:33335"
r = requests.get(
    "https://kasada-protected-site.com",
    proxies={"http": proxy, "https": proxy},
    verify=False,  # the unlocker re-signs tls; in production, install bright data's ca cert instead of disabling verification
)
print(r.text)

these services hide the bypass logic. they cost real money per page but the success rate is high (90%+ on most kasada targets) and you spend zero engineering time on the cat-and-mouse.

approach 2: real browser plus residential proxy plus stealth

the diy path that mostly works. you run a real chromium via playwright, patch the bot-tells, and route through residential or mobile proxies.

import asyncio
from playwright.async_api import async_playwright

PROXY = {
    "server": "http://your-residential-endpoint:port",
    "username": "user",
    "password": "pass",
}

STEALTH_JS = """
Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
window.chrome = { runtime: {} };
Object.defineProperty(navigator, 'plugins', {get: () => [1, 2, 3, 4, 5]});
Object.defineProperty(navigator, 'languages', {get: () => ['en-US', 'en']});
"""

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch(
            headless=False,
            proxy=PROXY,
            args=[
                "--disable-blink-features=AutomationControlled",
                "--disable-features=IsolateOrigins,site-per-process",
            ],
        )
        ctx = await browser.new_context(
            viewport={"width": 1920, "height": 1080},
            user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
            locale="en-US",
            timezone_id="America/New_York",
        )
        await ctx.add_init_script(STEALTH_JS)
        page = await ctx.new_page()
        await page.goto("https://kasada-protected-site.com", wait_until="networkidle")
        await page.wait_for_timeout(3000)
        html = await page.content()
        await browser.close()
        print(html[:2000])

asyncio.run(main())

key choices in that code:
– headless=False is significant. headless chromium has detectable artifacts, and headed mode is harder to fingerprint. on a server, run xvfb to fake a display.
– residential or mobile proxies are non-negotiable for kasada. datacenter ips are pre-flagged.
– the stealth init script patches the most obvious bot-tells. it’s not exhaustive. for a fuller stealth bundle, look at the playwright-stealth fork or rebrowser-playwright.
– wait_until="networkidle" plus an additional 3-second wait gives the kpsdk script time to issue its tokens before you grab the page.
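instead of a fixed 3-second wait, you can watch outgoing requests for the token headers and stop waiting as soon as they appear. the predicate below is a sketch; the playwright wiring is shown as comments because it needs a live browser:

```python
def carries_kpsdk_tokens(headers: dict) -> bool:
    """true once a request includes both kasada tokens (header names case-insensitive)."""
    keys = {k.lower() for k in headers}
    return "x-kpsdk-ct" in keys and "x-kpsdk-cd" in keys

# playwright wiring (sketch): resolve an asyncio.Event on the first
# token-bearing request, then stop waiting early instead of sleeping:
#   seen = asyncio.Event()
#   page.on("request",
#           lambda req: seen.set() if carries_kpsdk_tokens(req.headers) else None)
#   await page.goto(url, wait_until="domcontentloaded")
#   await asyncio.wait_for(seen.wait(), timeout=15)
```

this shaves a couple of seconds per page on fast solves and gives you a clean timeout signal when the challenge never completes, which is a much better failure indicator than a blank page.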

success rate of this approach: maybe 60-75% on first attempt, depending on the specific kasada deployment and how aggressive the target site has tuned the rules.

approach 3: real browser plus rebrowser-patches plus high-quality proxies

a step up from approach 2. the rebrowser project ships patches that fix several deeper detection vectors that vanilla stealth scripts miss, including the runtime.enable bug that defeats most playwright-stealth setups in 2025-2026.

npm install rebrowser-playwright

const { chromium } = require('rebrowser-playwright');

(async () => {
    const browser = await chromium.launch({
        headless: false,
        proxy: {
            server: 'http://residential.example.com:8080',
            username: 'user',
            password: 'pass',
        },
    });
    const ctx = await browser.newContext({
        viewport: { width: 1920, height: 1080 },
        userAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
    });
    const page = await ctx.newPage();
    await page.goto('https://kasada-protected-site.com', { waitUntil: 'networkidle' });
    await page.waitForTimeout(4000);
    console.log((await page.content()).slice(0, 2000));
    await browser.close();
})();

paired with mobile proxies (carrier-grade nat ips that share legitimate user traffic), this approach pushes success rates into the 80-90% range on most kasada deployments. it’s the sweet spot if you have the engineering bandwidth.

approach 4: token harvesting

the advanced and fragile path. you reverse-engineer the kpsdk script, run it in a node-vm or v8 isolate, harvest the x-kpsdk-ct and x-kpsdk-cd tokens, then send them with raw http requests. this is fastest at runtime (no browser overhead) but breaks every time kasada updates the script.

few public tools do this reliably anymore. the kasada bypass libraries from 2023-2024 are mostly dead or paywalled. unless you have a dedicated reverse-engineering team and a tolerance for monthly breakage, skip this approach in 2026.

proxies that work and proxies that don’t

proxy choice is the second-biggest variable after browser realism. against kasada specifically:

  • datacenter proxies: blocked. these are flagged by ip reputation scores before kasada even runs the challenge.
  • shared residential pools (cheap providers): 30-40% success rate. lots of recycled flagged ips.
  • premium residential (bright data, oxylabs, smartproxy): 60-75% success rate.
  • mobile proxies (4g/5g carrier nat): 85-95% success rate. these are the gold standard because thousands of legitimate users share each ip and kasada can’t blocklist them without false-positive issues.

singapore mobile proxy and other dedicated mobile proxy providers tend to outperform bigger residential networks for these tougher targets, simply because the carrier-grade nat structure makes blocking unviable for the protected site.

geo matching matters. if your target is a us retail site, use us residential or us mobile. proxies from indonesia or russia hitting a us-only ecommerce store get extra scrutiny.
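a cheap sanity check before a run is to ask an ip-geolocation api what country your proxy exits from and compare it to the target. the comparison is trivial; the json shape below assumes an ip-api.com-style response with a countryCode field, which is an assumption about that particular api:

```python
def proxy_geo_ok(geo: dict, expected_country: str) -> bool:
    """compare an ip-geolocation lookup result against the country the target expects.
    assumes an ip-api.com-style payload with a 'countryCode' field."""
    return geo.get("countryCode", "").upper() == expected_country.upper()

# hypothetical usage — route the lookup through the proxy you're about to use:
# r = requests.get("http://ip-api.com/json",
#                  proxies={"http": proxy, "https": proxy}, timeout=10)
# assert proxy_geo_ok(r.json(), "US"), "proxy exit geo doesn't match target"
```

catching a mismatched exit country here costs one request; catching it via a burned session costs a flagged ip.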

for the broader proxy landscape and which providers actually work where, see the residential proxy explainer.

kasada vs akamai vs datadome

| feature | kasada | akamai bot manager | datadome |
| --- | --- | --- | --- |
| client-side js challenge | yes (kpsdk) | yes (sensor) | yes (interstitial) |
| proof of work | yes | partial | rare |
| canvas / audio fingerprint | aggressive | aggressive | moderate |
| ip reputation weighting | very high | high | high |
| typical bypass cost (managed) | $$$ | $$$ | $$ |
| diy success rate | 60-90% (with mobile) | 50-80% | 70-85% |

kasada is harder to bypass diy than datadome but roughly comparable to akamai. the proof-of-work and the obfuscation depth are what set it apart.

a complete python recipe

putting it together. this is the script i’d actually run for a small kasada scraping job today.

import asyncio
import random
from playwright.async_api import async_playwright

PROXIES = [
    "http://user:pass@mobile-proxy-1.example.com:8000",
    "http://user:pass@mobile-proxy-2.example.com:8000",
]

STEALTH_INIT = """
Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
window.chrome = { runtime: {} };
Object.defineProperty(navigator, 'plugins', {get: () => Array(5).fill(0)});
Object.defineProperty(navigator, 'languages', {get: () => ['en-US', 'en']});
const getParameter = WebGLRenderingContext.prototype.getParameter;
WebGLRenderingContext.prototype.getParameter = function(p) {
    if (p === 37445) return 'Intel Inc.';
    if (p === 37446) return 'Intel Iris OpenGL Engine';
    return getParameter.apply(this, arguments);
};
"""

async def scrape(url):
    proxy_url = random.choice(PROXIES)
    # note: this naive split breaks if the password contains '@' or ':'
    creds, host = proxy_url.replace("http://", "").split("@")
    username, password = creds.split(":")
    server = "http://" + host

    async with async_playwright() as p:
        browser = await p.chromium.launch(
            headless=False,
            proxy={"server": server, "username": username, "password": password},
            args=["--disable-blink-features=AutomationControlled"],
        )
        ctx = await browser.new_context(
            viewport={"width": 1920, "height": 1080},
            user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
            locale="en-US",
            timezone_id="America/New_York",
        )
        await ctx.add_init_script(STEALTH_INIT)
        page = await ctx.new_page()
        try:
            await page.goto(url, wait_until="networkidle", timeout=45000)
            await page.wait_for_timeout(random.randint(3000, 6000))
            await page.mouse.move(random.randint(100, 800), random.randint(100, 600))
            await page.wait_for_timeout(random.randint(500, 1500))
            html = await page.content()
            return html
        finally:
            await browser.close()

async def main():
    html = await scrape("https://www.example-kasada-site.com/category")
    print(html[:3000])

asyncio.run(main())

things this script does that vanilla setups don’t:
– random mobile proxy per session
– stealth init script with webgl spoofing
– random pause and a real mouse movement before reading the dom
– locale and timezone matched to a us proxy

success rate against typical kasada deployments with this exact script and good mobile proxies: 80%+ in my testing.

error patterns and what they mean

| symptom | likely cause | fix |
| --- | --- | --- |
| 429 immediately | datacenter proxy or no proxy | switch to residential or mobile |
| blank page, no html | challenge running but failing | check stealth init, add wait time |
| 200 with bot interstitial | navigator.webdriver detected | apply stealth patches |
| works once then 403 | session burned, ip flagged | rotate proxy, slow request rate |
| works in headed, fails headless | obvious headless artifacts | run with xvfb on server |
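running headed chromium (headless=False) on a display-less server is typically handled with xvfb. a minimal sketch — package names assume debian/ubuntu, and scrape.py stands in for your own entry point:

```shell
# install the virtual framebuffer x server (debian/ubuntu package name)
sudo apt-get install -y xvfb

# run the scraper inside a virtual display so headed mode works without
# a monitor; --auto-servernum avoids display-number collisions
xvfb-run --auto-servernum python3 scrape.py
```

the browser believes it has a real screen, which removes the headless-specific artifacts without needing a desktop environment on the box.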

most diy attempts fail at the second or third row. the fix is always either better stealth or better proxy.
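the “rotate proxy, slow request rate” fix can be automated. a sketch of a retry wrapper that picks a fresh proxy per attempt and backs off exponentially — scrape here is any async callable (such as the recipe earlier in this guide, adapted to take a proxy argument) that raises when it gets blocked:

```python
import asyncio
import random

async def scrape_with_retries(scrape, url, proxies, attempts=3, base_delay=5.0):
    """retry an async scrape with a fresh random proxy per attempt and
    jittered exponential backoff between failures."""
    last_exc = None
    for attempt in range(attempts):
        proxy = random.choice(proxies)
        try:
            return await scrape(url, proxy)
        except Exception as exc:
            last_exc = exc
            # back off before retrying so you don't burn the next ip too
            await asyncio.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
    raise last_exc
```

the jitter matters: perfectly regular retry intervals are themselves a timing signature that behavioral detection can pick up on.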

ethics and legal

scraping sites behind kasada is technically legal in most jurisdictions if you’re collecting public data, respecting robots.txt where it applies, and not violating cfaa or computer misuse acts. the web scraping legal guide covers the nuances.

practically, sites use kasada because they don’t want bots. respect the rate limit. don’t hammer endpoints. if a site has a public api, use that instead. if an sec- or finance-related target offers paid data feeds, those are usually worth it.

faq

can i bypass kasada with python requests?
no. raw requests cannot run the kpsdk javascript challenge. you need a real browser or a managed bypass api.

does undetected-chromedriver bypass kasada?
sometimes, against older or less-tuned deployments. against current kasada it has a 30-40% success rate at best. rebrowser-patches plus residential proxies do better.

which proxy type works best for kasada?
mobile (4g/5g) carrier-grade nat proxies. residential is acceptable. datacenter is blocked.

how much does it cost to scrape a kasada site?
managed bypass apis charge $1-5 per 1000 requests on kasada targets. diy with mobile proxies costs about $0.01-0.05 per request depending on your provider.
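taking those numbers at face value, managed can be cheaper per request than diy mobile proxies even before counting engineering time. a quick sketch — the volume and the diy upkeep figure are assumptions, not quotes:

```python
def monthly_cost(n_requests: int, per_request_usd: float, fixed_usd: float = 0.0) -> float:
    """simple linear cost model: volume * unit price + fixed overhead."""
    return n_requests * per_request_usd + fixed_usd

n = 50_000  # assumed monthly volume
managed = monthly_cost(n, 3.00 / 1000)          # mid-range managed price: $3 per 1k
diy = monthly_cost(n, 0.02, fixed_usd=1500.0)   # $0.02/req mobile proxies + assumed upkeep
print(f"managed: ${managed:.0f}/mo, diy: ${diy:.0f}/mo")
```

with these assumed mid-range figures, managed comes out well ahead; diy only wins if your proxy cost lands at the very bottom of the range and your maintenance burden is near zero.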

will my code stop working when kasada updates?
managed apis abstract that risk. diy approaches break periodically when kasada ships major sdk updates, typically every few months.

is there an open-source kasada bypass library?
not one that’s actively maintained and works reliably in 2026. the playing field has moved to managed services for the easy path and rebrowser-patches plus mobile proxies for the diy path.

conclusion

kasada is hard but not impossible. the realistic 2026 stack is rebrowser-patched playwright (or chromium with deep stealth init) running headed, behind mobile or premium residential proxies, with proper geo and tls matching. that gets you to 80-90% success on most kasada sites.

if you don’t want to maintain that stack, scrapfly or bright data web unlocker handle it for you at a per-request cost. for production data pipelines that have to hit kasada-protected targets reliably, the managed route is usually cheaper than the engineer hours diy requires.

start with a managed bypass for proof-of-concept. swap in a diy stack once you know the data is worth automating long-term.
