Bypassing Akamai Bot Manager with Mobile Proxies
Akamai Bot Manager is the second most common anti-bot platform you will encounter when scraping at scale, protecting major e-commerce sites, airlines, financial institutions, and government portals. It is technically different from Cloudflare in important ways, and strategies that work against one do not always transfer to the other.
Akamai’s detection is built around client-side sensor data collection, server-side behavioral analysis, and IP reputation scoring. The sensor component is particularly sophisticated — it gathers hundreds of data points from the browser and transmits them via encrypted payloads that change structure between versions.
This guide dissects how Akamai Bot Manager works at each detection layer, explains why mobile proxies achieve higher bypass rates than alternatives, and provides practical strategies for building a scraping setup that handles Akamai-protected targets reliably.
How Akamai Bot Manager Works
Akamai’s bot detection is different from Cloudflare in a fundamental way. While Cloudflare primarily uses server-side analysis with optional JavaScript challenges, Akamai relies heavily on a client-side JavaScript sensor that collects extensive data about the browser environment and user behavior.
The Akamai Sensor Script
When you visit an Akamai-protected page, a JavaScript file is loaded (typically from a path like /_sec/cp_challenge/ or embedded in the page). This sensor script collects data and generates an encrypted payload that is sent to Akamai’s servers for analysis.
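To confirm the sensor is active on a target and see where it reports, you can log the browser's outgoing requests. Below is a minimal sketch using Playwright's request events; the "sensor_data" substring filter reflects commonly observed Akamai deployments, so verify it against your target's actual traffic.

```python
import asyncio
from playwright.async_api import async_playwright

async def log_sensor_requests(url):
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()

        def on_request(request):
            # Akamai sensor submissions are typically POSTs whose body
            # contains a "sensor_data" field (assumption; inspect your target)
            if request.method == "POST" and "sensor_data" in (request.post_data or ""):
                print("sensor POST ->", request.url)

        page.on("request", on_request)
        await page.goto(url, wait_until="networkidle")
        await browser.close()

asyncio.run(log_sensor_requests("https://example-akamai-site.com"))
```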
What the sensor collects:
- Browser properties: navigator object properties (userAgent, platform, language, hardwareConcurrency, deviceMemory), screen dimensions, color depth, plugin list, MIME types
- Canvas fingerprint: The sensor renders specific text and shapes to an HTML5 canvas and hashes the result. Each browser/GPU combination produces a unique hash.
- WebGL fingerprint: Renderer string, vendor string, supported extensions, shader precision
- AudioContext fingerprint: Generates an audio signal and analyzes the processing characteristics, which vary by hardware and browser
- Font detection: Measures rendering dimensions of text in specific fonts to determine which fonts are installed
- Timing data: Performance.now() resolution, timestamp deltas between events, function execution timing
- Event data: Mouse movements, mouse clicks, keyboard events, touch events, scroll events — all with timestamps and coordinates
- Automation detection: Checks for navigator.webdriver, Selenium markers, PhantomJS artifacts, headless Chrome indicators, ChromeDriver presence
The sensor compiles these data points into a structured payload, encrypts it using an algorithm that changes between versions, and submits it as a cookie (typically _abck) or via an API call.
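One practical consequence: you can often tell whether Akamai accepted your sensor payload by inspecting the _abck cookie. A heuristic sketch, based on the community-observed convention that the second tilde-delimited field flips from -1 to 0 after validation; this is undocumented behavior and may change between sensor versions.

```python
def abck_looks_validated(abck_value: str) -> bool:
    """Heuristic check of an _abck cookie's validation state.

    The "-1" (unvalidated) vs "0" (validated) convention is a
    community observation, not documented by Akamai -- treat it
    as an assumption and re-verify per target.
    """
    parts = abck_value.split("~")
    return len(parts) > 1 and parts[1] == "0"
```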
Sensor Data Versions
Akamai regularly updates the sensor script, changing:
- The data collection methods
- The encryption algorithm
- The payload structure
- The obfuscation techniques applied to the JavaScript itself
This means any hardcoded sensor spoofing approach breaks with each update. You cannot write a static “Akamai bypass” that works permanently. The sensor must be executed in a real browser environment, or you need to reverse-engineer each new version — a significant ongoing effort.
Device Fingerprinting Deep Dive
Akamai’s device fingerprinting goes beyond what most anti-bot systems implement. Here is what makes it particularly thorough:
Canvas Fingerprinting:
The sensor renders specific content (including emoji, text with particular fonts and anti-aliasing, and geometric shapes) to a canvas element. The pixel-by-pixel rendering varies based on GPU, driver version, OS rendering engine, and browser. A headless Chrome running on a Linux server with a software renderer produces a canvas hash that differs from Chrome on a real Windows machine with an NVIDIA GPU.
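You can audit the canvas hash your own automation produces and compare it against a real machine before pointing it at a protected target. A minimal sketch; the drawing commands here are illustrative, not Akamai's actual rendering payload.

```python
import hashlib

CANVAS_JS = """
() => {
    const c = document.createElement('canvas');
    c.width = 220; c.height = 30;
    const ctx = c.getContext('2d');
    ctx.textBaseline = 'top';
    ctx.font = '14px Arial';
    ctx.fillStyle = '#f60';
    ctx.fillRect(0, 0, 100, 20);
    ctx.fillStyle = '#069';
    ctx.fillText('fingerprint probe \\ud83d\\ude00', 2, 2);
    return c.toDataURL();
}
"""

async def canvas_hash(page):
    # Hash the rendered pixels; differing GPUs/renderers produce
    # differing data URLs, hence differing hashes
    data_url = await page.evaluate(CANVAS_JS)
    return hashlib.sha256(data_url.encode()).hexdigest()
```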
WebGL Fingerprinting:
Beyond the renderer and vendor strings, Akamai checks WebGL parameters like MAX_TEXTURE_SIZE, MAX_VERTEX_ATTRIBS, and shader precision. Virtual machines and containers often expose different WebGL parameters than real hardware.
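A quick way to check what your environment exposes is to read the same parameters yourself. A sketch using standard WebGL constants; the comparison baseline against real hardware is up to you.

```python
WEBGL_JS = """
() => {
    const gl = document.createElement('canvas').getContext('webgl');
    if (!gl) return null;
    const dbg = gl.getExtension('WEBGL_debug_renderer_info');
    return {
        vendor: dbg ? gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL) : null,
        renderer: dbg ? gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL) : null,
        maxTextureSize: gl.getParameter(gl.MAX_TEXTURE_SIZE),
        maxVertexAttribs: gl.getParameter(gl.MAX_VERTEX_ATTRIBS),
    };
}
"""

async def webgl_profile(page):
    # Software renderers (e.g., SwiftShader in containers) typically
    # report different values here than real GPUs
    return await page.evaluate(WEBGL_JS)
```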
Behavioral Biometrics:
Mouse movement data is analyzed for natural patterns. Real humans produce curved, slightly irregular mouse paths. Bots produce either no mouse movement (headless) or perfectly linear movements (Selenium moveToElement). The timing between mouse movements, clicks, and page interactions is also profiled.
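If you want smoother movement than Playwright's built-in steps parameter provides, you can drive the cursor along a jittered Bezier curve. A minimal sketch; the control-point offsets and timing ranges are illustrative.

```python
import random

def curved_path(x0, y0, x1, y1, steps=25):
    """Yield (x, y) points along a jittered quadratic Bezier curve."""
    # Control point offset to one side of the straight line
    cx = (x0 + x1) / 2 + random.randint(-120, 120)
    cy = (y0 + y1) / 2 + random.randint(-120, 120)
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        # Small jitter so the curve is not mathematically perfect
        yield x + random.uniform(-2, 2), y + random.uniform(-2, 2)

async def human_mouse_move(page, x0, y0, x1, y1):
    for x, y in curved_path(x0, y0, x1, y1):
        await page.mouse.move(x, y)
        await page.wait_for_timeout(random.randint(8, 25))
```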
IP Reputation Scoring
While the sensor is Akamai’s primary detection mechanism, IP reputation provides the baseline risk assessment.
Akamai’s IP evaluation:
- Akamai Intelligent Platform data: Akamai delivers roughly 30% of global web traffic. This gives them an unmatched dataset for IP reputation. They see the same IP across hundreds of thousands of their customer sites.
- ASN classification: Same as Cloudflare — datacenter ASNs start with high risk scores.
- Request patterns: An IP that hits multiple Akamai-protected sites in rapid succession is flagged for bot-like behavior.
- Client reputation: Akamai maintains a dynamic reputation score that updates in real-time based on the IP’s behavior across their network.
Detection Decision Flow
Akamai combines all signals into a bot score:
1. Request arrives — IP reputation checked (server-side)
2. Page served with sensor script — client-side data collection begins
3. Sensor payload submitted — server-side analysis of browser fingerprint and behavioral data
4. Bot score calculated — composite score from IP reputation + sensor data + behavioral analysis
5. Action taken — allow, challenge, rate limit, or block based on score thresholds
The site owner configures the action thresholds. Some sites block aggressively (any suspicion = block), while others only block high-confidence bots.
Why Mobile Proxies Score Higher
IP Reputation Layer
Mobile proxies provide the same CGNAT advantage against Akamai as they do against Cloudflare. Mobile carrier IPs are shared by thousands of legitimate users, making it impossible for Akamai to block them without massive false positives.
Akamai’s reputation system scores mobile carrier IPs in the lowest risk tier because:
- The ASN belongs to a legitimate mobile carrier
- The IP has thousands of legitimate users generating normal traffic patterns
- Historical abuse reports are diluted across all users sharing the IP
- Mobile devices are the primary way real humans access the internet
Sensor Data Consistency
Mobile proxies do not replace good sensor data; they complement it. When your browser automation presents a realistic device fingerprint on a high-trust IP, the composite bot score stays in the “likely human” range.
The equation is simple: Low IP risk + Realistic sensor data = Low bot score
With datacenter proxies, even perfect sensor data cannot overcome the high IP risk: High IP risk + Realistic sensor data = Medium-high bot score
Traffic Pattern Alignment
Real mobile users access e-commerce sites, airline booking systems, and financial platforms from mobile devices constantly. When your scraping traffic comes from a mobile carrier IP, the traffic pattern matches what Akamai expects from that IP’s ASN.
For a comprehensive comparison of proxy types and their trust scores, see our best proxies for web scraping guide.
Practical Bypass Strategies
Strategy 1: Full Browser Automation with Stealth
This is the most reliable approach. Run a real browser with stealth modifications through a mobile proxy.
```python
from playwright.async_api import async_playwright
import asyncio
import random

async def scrape_akamai_site(url, proxy_config):
    async with async_playwright() as p:
        browser = await p.chromium.launch(
            headless=True,
            args=[
                '--disable-blink-features=AutomationControlled',
                '--disable-features=IsolateOrigins,site-per-process'
            ]
        )
        context = await browser.new_context(
            proxy=proxy_config,
            viewport={'width': 1920, 'height': 1080},
            user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                       'AppleWebKit/537.36 (KHTML, like Gecko) '
                       'Chrome/120.0.0.0 Safari/537.36',
            locale='en-SG',
            timezone_id='Asia/Singapore',
            color_scheme='light'
        )
        page = await context.new_page()

        # Remove automation indicators before any page script runs
        await page.add_init_script("""
            Object.defineProperty(navigator, 'webdriver', { get: () => undefined });
            Object.defineProperty(navigator, 'plugins', {
                get: () => [1, 2, 3, 4, 5]
            });
            Object.defineProperty(navigator, 'languages', {
                get: () => ['en-US', 'en']
            });
            window.chrome = {
                runtime: {},
                loadTimes: function() {},
                csi: function() {},
                app: {}
            };
        """)

        # Navigate to the page
        await page.goto(url, wait_until='networkidle', timeout=60000)

        # Simulate human-like behavior to generate sensor data
        await simulate_human_behavior(page)

        # Give the sensor time to complete its submission
        await page.wait_for_timeout(3000)

        content = await page.content()
        cookies = await context.cookies()
        await browser.close()
        return content, cookies

async def simulate_human_behavior(page):
    """Generate realistic mouse and scroll events for the Akamai sensor."""
    # Random mouse movements
    for _ in range(random.randint(3, 7)):
        x = random.randint(100, 1800)
        y = random.randint(100, 900)
        await page.mouse.move(x, y, steps=random.randint(5, 15))
        await page.wait_for_timeout(random.randint(100, 500))

    # Scroll down slightly
    await page.evaluate("window.scrollBy(0, %d)" % random.randint(100, 300))
    await page.wait_for_timeout(random.randint(500, 1500))

    # Another mouse movement
    await page.mouse.move(
        random.randint(200, 1600),
        random.randint(200, 800),
        steps=random.randint(8, 20)
    )
```

The simulate_human_behavior function is critical for Akamai. Unlike Cloudflare, which primarily evaluates the JavaScript challenge result, Akamai’s sensor actively monitors ongoing behavioral data. Pages that receive no mouse movement or scroll events generate suspicious sensor payloads.
Strategy 2: Cookie Harvesting and Reuse
Solve the Akamai challenge once in a browser, then reuse the cookies for subsequent HTTP-level requests:
```python
import requests

async def harvest_akamai_cookies(url, proxy_config):
    """Get valid Akamai cookies via browser automation."""
    content, cookies = await scrape_akamai_site(url, proxy_config)

    # Extract the Akamai-relevant cookies
    cookie_dict = {}
    for cookie in cookies:
        if cookie['name'] in ('_abck', 'bm_sz', 'ak_bmsc', 'bm_sv'):
            cookie_dict[cookie['name']] = cookie['value']
    return cookie_dict

def scrape_with_cookies(url, cookies, proxy):
    """Use harvested cookies for fast HTTP requests."""
    session = requests.Session()
    session.cookies.update(cookies)
    session.headers.update({
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/120.0.0.0 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'en-US,en;q=0.9',
        'Accept-Encoding': 'gzip, deflate, br',
    })
    response = session.get(url, proxies=proxy, timeout=15)
    return response
```

Important caveats with cookie reuse against Akamai:
- Cookies are typically bound to the IP that generated them. Use the same proxy.
- The _abck cookie has a validity period. Re-harvest before it expires.
- Some Akamai implementations require ongoing sensor data submission, making pure cookie reuse insufficient.
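Given these caveats, a practical pattern is to detect blocks and re-harvest automatically. A sketch that reuses the two functions above; the 403 check is a simplification, since some deployments serve a challenge page with a 200 status.

```python
async def scrape_resilient(url, proxy_config, requests_proxy, max_refreshes=2):
    """Use harvested cookies; re-harvest whenever a request looks blocked."""
    cookies = await harvest_akamai_cookies(url, proxy_config)
    for _ in range(max_refreshes + 1):
        response = scrape_with_cookies(url, cookies, requests_proxy)
        # 403 is the common block status; some deployments return 200 with
        # a challenge page instead, so add content checks for your target
        if response.status_code != 403:
            return response
        cookies = await harvest_akamai_cookies(url, proxy_config)
    raise RuntimeError("Still blocked after refreshing Akamai cookies")
```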
Strategy 3: Session Persistence with Periodic Refresh
For long-running scraping operations, maintain a browser session and periodically refresh the sensor:
```python
import time
from playwright.async_api import async_playwright

class AkamaiSessionManager:
    def __init__(self, proxy_config):
        self.proxy_config = proxy_config
        self.browser = None
        self.context = None
        self.page = None
        self.session_start = None
        self.max_session_age = 1800  # 30 minutes

    async def start(self):
        self.playwright = await async_playwright().start()
        self.browser = await self.playwright.chromium.launch(headless=True)
        await self._new_session()

    async def _new_session(self):
        if self.context:
            await self.context.close()
        self.context = await self.browser.new_context(
            proxy=self.proxy_config,
            viewport={'width': 1920, 'height': 1080},
            user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                       'AppleWebKit/537.36 (KHTML, like Gecko) '
                       'Chrome/120.0.0.0 Safari/537.36'
        )
        self.page = await self.context.new_page()
        self.session_start = time.time()

    async def scrape(self, url):
        # Refresh the session if it has aged out
        if time.time() - self.session_start > self.max_session_age:
            await self._new_session()
            # Re-solve the initial challenge on the new session
            await self.page.goto(url, wait_until='networkidle')
            await simulate_human_behavior(self.page)
            await self.page.wait_for_timeout(3000)
        await self.page.goto(url, wait_until='networkidle')
        return await self.page.content()

    async def close(self):
        if self.browser:
            await self.browser.close()
        if self.playwright:
            await self.playwright.stop()
```

Browser Automation Requirements
Headless vs. Headed Mode
Akamai’s sensor is more effective at detecting headless browsers than Cloudflare’s JavaScript challenge. The sensor checks:
- The navigator.webdriver property
- Chrome DevTools Protocol artifacts
- Headless-specific window properties
- Screen and viewport inconsistencies
Chrome’s --headless=new mode (available since Chrome 112) passes most of these checks because it uses the full browser rendering pipeline. Older headless modes are detected more reliably.
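Depending on your Playwright version, the legacy headless pipeline may still be the default. A commonly shared workaround is to launch “headed” while passing the flag yourself; treat this as an assumption to verify against your Playwright release.

```python
import asyncio
from playwright.async_api import async_playwright

async def launch_new_headless():
    async with async_playwright() as p:
        # headless=False keeps Playwright from adding the legacy --headless
        # flag; --headless=new then selects the full rendering pipeline
        # (workaround reported for older Playwright versions -- verify)
        browser = await p.chromium.launch(
            headless=False,
            args=['--headless=new'],
        )
        page = await browser.new_page()
        await page.goto('https://example.com')
        await browser.close()

asyncio.run(launch_new_headless())
```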
For Puppeteer-specific configuration against Akamai, see our Puppeteer proxy guide. For Playwright, see the Playwright proxy guide.
Mouse and Keyboard Events
The Akamai sensor tracks:
- Mouse move events with coordinates, timestamps, and velocity
- Mouse click events with button type and target element
- Keyboard events with key codes and timing
- Touch events (on mobile User-Agents)
- Scroll events with scroll distance and direction
A page visit with zero mouse events and zero keyboard events generates a sensor payload that scores as “bot” with high confidence. Always include at least basic mouse movement in your automation.
Timing Patterns
The sensor measures:
- Time between page load and first user interaction
- Time between consecutive interactions
- Total time spent on page before submitting forms or clicking links
Bots that navigate instantly to the next page without any interaction time are flagged. Add realistic delays:
```python
# Bad: immediate navigation
await page.goto(url1)
data1 = await page.content()
await page.goto(url2)  # Instant; no human would do this

# Good: realistic timing
await page.goto(url1)
await page.wait_for_timeout(random.randint(2000, 5000))
await simulate_human_behavior(page)
data1 = await page.content()
await page.wait_for_timeout(random.randint(1000, 3000))
await page.goto(url2)
```

Success Rate Expectations
By Proxy Type
| Proxy Type | Akamai Bypass Rate (with stealth browser) |
|---|---|
| Datacenter | 5-15% |
| Rotating Residential | 40-60% |
| ISP/Static Residential | 50-65% |
| Mobile (4G/5G) | 80-92% |
Factors Affecting Success
- Akamai configuration level: Basic Bot Manager vs. Advanced Bot Manager (Premier)
- Sensor version: Newer versions are harder to pass with stealth plugins
- Target site’s custom rules: Some sites add extra validation beyond standard Akamai
- Request volume: Lower volume per IP = higher success rate
- Behavioral realism: More realistic mouse/scroll behavior = higher success rate
Akamai vs. Cloudflare Difficulty
Akamai is generally harder to bypass than Cloudflare for two reasons:
- Client-side sensor is more comprehensive. Cloudflare’s JavaScript challenge primarily validates that JavaScript executes correctly. Akamai’s sensor continuously collects behavioral and environmental data.
- Sensor payload encryption changes frequently. You cannot replay or spoof sensor data from a previous version. Each update requires re-evaluation of your bypass approach.
However, mobile proxies remain the most effective proxy type against both platforms for the same underlying reason: CGNAT makes the IPs too valuable to block.
Rate Limiting Considerations
Even with successful bypass, respect Akamai’s rate limiting:
- Per-IP rate limits: Keep requests per proxy IP to 3-5 per minute for aggressive sites, 10-15 for moderate sites
- Per-session limits: Vary the number of pages per session between 10-50
- Time-of-day patterns: Scrape during business hours in the target site’s timezone for more natural traffic patterns
- Request spacing: Add 2-5 second delays between page loads, with occasional longer pauses (10-30 seconds) to simulate reading time; a minimal pacing helper is sketched below
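The pacing helper below implements the spacing rules above: random 2-5 second delays with an occasional 10-30 second “reading” pause. The probability and delay ranges are illustrative assumptions to tune per target.

```python
import asyncio
import random

async def paced_fetch(page, urls, long_pause_chance=0.15):
    """Visit URLs with human-like spacing between page loads."""
    results = []
    for url in urls:
        await page.goto(url, wait_until='networkidle')
        results.append(await page.content())
        if random.random() < long_pause_chance:
            # Occasional long pause to simulate reading time
            await asyncio.sleep(random.uniform(10, 30))
        else:
            await asyncio.sleep(random.uniform(2, 5))
    return results
```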
Conclusion
Akamai Bot Manager is a formidable anti-bot platform, but it operates under the same constraint as every other detection system: it cannot block mobile carrier IPs without unacceptable false positive rates. This constraint is your primary advantage.
The bypass strategy for Akamai requires more sophistication than Cloudflare — you need realistic browser automation with behavioral simulation, not just a browser that executes JavaScript. But the foundation remains the same: start with high-trust mobile proxy IPs, add stealth browser automation, and maintain realistic behavioral patterns.
DataResearchTools provides mobile proxies on Singapore carrier networks (Singtel, StarHub, M1) that consistently score in the lowest risk tier on Akamai’s IP reputation system. Get started with mobile proxies for your Akamai bypass needs and build on the strongest possible IP foundation.
Related Reading
- How Anti-Bot Systems Detect Scrapers (Cloudflare, Akamai, PerimeterX)
- API vs Web Scraping: When You Need Proxies (and When You Don’t)
- aiohttp + BeautifulSoup: Async Python Scraping
- ASEAN Data Protection Laws: A Web Scraping Compliance Matrix
- Axios + Cheerio: Lightweight Node.js Scraping
- How to Build an Ethical Web Scraping Policy for Your Company