Uber Eats restaurant data is some of the most commercially valuable food-delivery intelligence available in 2026, covering 45+ countries, millions of menu items, and real-time pricing that shifts by delivery zone. If you’re building a competitive pricing tool, a restaurant analytics dashboard, or a market-entry model, scraping Uber Eats at scale is almost certainly on your list — and it’s considerably harder than scraping its peers.
What the Uber Eats Data Model Looks Like
Uber Eats serves its storefront through a GraphQL API (eats.uber.com/graphql) that’s been progressively locked down since late 2024. The core objects you care about:
- Feed — the city/zone-level restaurant list, paginated by `cursor`
- Store — individual restaurant metadata (name, UUID, rating, category, delivery fee, ETA)
- Menu — nested sections, items, modifiers, and price overrides by zone
Zone matters more than city. A single “New York” market has dozens of delivery polygons, each returning different restaurants and prices. If you’re scraping for competitive pricing (like How to Scrape DoorDash Restaurant Menus and Pricing (2026) covers for DoorDash), you need to pin your requests to specific lat/lon coordinates, not just city slugs.
The Anti-Bot Stack You’re Up Against
Uber Eats runs a layered defense in 2026:
- Fingerprint-based TLS profiling — JA3/JA4 fingerprints are checked; Python `requests` with default TLS settings gets flagged within minutes
- Session tokens tied to device fingerprint — the `x-uber-device-uuid` and `x-csrf-token` headers must stay consistent per session
- Rate limiting at the zone level — more than ~80 feed requests per IP per hour triggers soft blocks (429 with a `Retry-After` header)
- Bot score via PerimeterX/HUMAN Security — JavaScript challenge injected on the web, SDK attestation on mobile endpoints
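The fingerprint-consistency requirement above boils down to minting one identity per session and reusing it for every request in that session. A minimal sketch, assuming the header names discussed in this article (the values here are placeholders, and `new_session_identity` is a hypothetical helper, not an Uber Eats API):

```python
import uuid

def new_session_identity() -> dict:
    """Build one per-session header set. Reuse it unchanged for every
    request in that session so the device fingerprint stays consistent."""
    return {
        "x-uber-device-uuid": str(uuid.uuid4()),  # fixed for the session's lifetime
        "x-csrf-token": "",  # filled in after session init, bound to the UUID above
        "content-type": "application/json",
        "accept-language": "en-US",
    }

identity = new_session_identity()
```

Generate a new identity only when you retire the whole session; swapping just one of the two headers mid-session is what gets flagged.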
The mobile API (cn-geo.uber.com) is meaningfully easier to work with than the web GraphQL endpoint. Mobile clients use a fixed SDK version with predictable header patterns, and the attestation layer is looser. Most serious scrapers target mobile.
Recommended Stack and Configuration
Use a Python async setup with curl_cffi (which impersonates Chrome TLS correctly) or the Playwright stealth plugin if you need browser rendering. For the mobile path, httpx with manually set headers works well.
```python
from curl_cffi.requests import AsyncSession

HEADERS = {
    "x-uber-device-uuid": "generated-uuid-v4",
    "x-csrf-token": "token-from-session-init",
    "content-type": "application/json",
    "user-agent": "UberEats/10.141.10001 (iPhone; iOS 17.4; Scale/3.00)",
    "accept-language": "en-US",
}

async def fetch_feed(lat: float, lon: float, cursor: str = "", session=None):
    payload = {
        "operationName": "GetFeed",
        "variables": {"lat": lat, "lng": lon, "cursor": cursor},
    }
    resp = await session.post(
        "https://www.ubereats.com/api/getFeedV1",
        json=payload,
        headers=HEADERS,
        impersonate="chrome120",
    )
    return resp.json()
```

Rotate sessions, not just IPs. Each session init (`/api/getActiveSessionV1`) returns a CSRF token bound to that device UUID. A new IP with a stale token fails the same as a flagged IP.
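The rotate-together rule can be sketched as a generator that mints the IP, device UUID, and CSRF token as one unit, so a stale token can never land on a fresh IP. `fake_init` is a stand-in for the real session-init call; the class and proxy addresses are illustrative:

```python
import itertools
import uuid
from dataclasses import dataclass

@dataclass
class ScrapeSession:
    proxy: str          # residential IP this session is bound to
    device_uuid: str    # fixed for the session's lifetime
    csrf_token: str     # returned by session init, bound to the UUID

def fake_init(proxy: str, device_uuid: str) -> str:
    """Stand-in for the /api/getActiveSessionV1 call."""
    return f"csrf-{device_uuid[:8]}"

def rotate(proxies):
    """Yield fresh (IP, UUID, token) triples, always minted together."""
    for proxy in itertools.cycle(proxies):
        device_uuid = str(uuid.uuid4())
        csrf_token = fake_init(proxy, device_uuid)
        yield ScrapeSession(proxy, device_uuid, csrf_token)

sessions = rotate(["10.0.0.1:8000", "10.0.0.2:8000"])
s1, s2 = next(sessions), next(sessions)
```

When any one element of the triple is flagged, retire the whole `ScrapeSession` and pull the next one.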
For proxy infrastructure, residential IPs in the target delivery zone are non-negotiable — datacenter ranges are permabanned. Mobile proxies (real SIM-card IPs) get the best pass rates but are expensive at scale. A reasonable split for a 10-city crawl is 80% residential rotation with 20% mobile IPs reserved for session init and high-value store pages.
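The 80/20 split above amounts to routing by request purpose. A minimal sketch, assuming a small in-memory pool (the pool contents and purpose labels are illustrative):

```python
import random

RESIDENTIAL = [f"res-{i}" for i in range(8)]   # rotating residential pool (~80%)
MOBILE = [f"mob-{i}" for i in range(2)]        # SIM-card IPs (~20%), reserved

def pick_proxy(purpose: str) -> str:
    """Session inits and high-value store pages get mobile IPs;
    everything else rides the residential rotation."""
    if purpose in ("session_init", "store_detail"):
        return random.choice(MOBILE)
    return random.choice(RESIDENTIAL)
```

Keeping the mobile pool reserved for the requests that face the strictest checks is what makes the small 20% share go a long way.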
Scaling Across Cities and Zones
Uber Eats uses H3 geohexagons internally. You don’t need to reverse-engineer their zone boundaries — instead, define a grid of lat/lon points ~1.5 km apart across your target city and deduplicate by `storeUUID` in the merge step. A medium city like Chicago needs roughly 200-300 seed coordinates to achieve full coverage.
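Generating that seed grid is straightforward: one degree of latitude is ~111.32 km, and longitude spacing shrinks by cos(latitude). A sketch (the Chicago bounding box below covers only the core, so it yields fewer points than a full-city crawl would):

```python
import math

def seed_grid(lat_min, lat_max, lon_min, lon_max, spacing_km=1.5):
    """Rectangular grid of seed coordinates ~spacing_km apart.
    Longitude step is widened by 1/cos(lat) so ground spacing stays even."""
    dlat = spacing_km / 111.32
    points = []
    lat = lat_min
    while lat <= lat_max:
        dlon = spacing_km / (111.32 * math.cos(math.radians(lat)))
        lon = lon_min
        while lon <= lon_max:
            points.append((round(lat, 5), round(lon, 5)))
            lon += dlon
        lat += dlat
    return points

# Chicago core, roughly 41.79-41.98 N, 87.75-87.60 W
chicago_core = seed_grid(41.79, 41.98, -87.75, -87.60)
```

Feed each point to the feed endpoint, collect every store it returns, and deduplicate across points afterward — overlapping zones will return the same store many times.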
| City | Seed points needed | Unique stores (approx) | Crawl time (10 req/s) |
|---|---|---|---|
| New York | 480 | 14,000+ | ~12 min |
| Chicago | 260 | 6,500 | ~7 min |
| London | 350 | 9,000 | ~9 min |
| Sydney | 180 | 4,200 | ~5 min |
| Singapore | 90 | 2,800 | ~3 min |
The parallelism ceiling is around 10-12 concurrent requests per residential IP before you hit soft rate limits. With a pool of 50 rotating IPs you can sustain ~500 req/s comfortably, which covers a major city in under 5 minutes. For multi-country coverage similar to Scraping GoFood (Gojek) Restaurant Listings at Scale, you’ll want region-specific proxy pools rather than routing all traffic through a single country.
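The per-IP ceiling is easy to enforce with one semaphore per proxy. This sketch simulates requests with a sleep and records peak concurrency to show the cap holding (`fetch_with_cap` and the tracker are illustrative scaffolding, not part of any real client):

```python
import asyncio

PER_IP_LIMIT = 10  # stay under the ~10-12 concurrent soft-limit per IP

async def fetch_with_cap(sem, tracker):
    """Stand-in for one feed request; the semaphore enforces the ceiling."""
    async with sem:
        tracker["now"] += 1
        tracker["peak"] = max(tracker["peak"], tracker["now"])
        await asyncio.sleep(0.01)  # simulated network latency
        tracker["now"] -= 1

async def crawl():
    sem = asyncio.Semaphore(PER_IP_LIMIT)
    tracker = {"now": 0, "peak": 0}
    await asyncio.gather(*(fetch_with_cap(sem, tracker) for _ in range(50)))
    return tracker["peak"]

peak = asyncio.run(crawl())
```

In a real crawler you would keep a `{proxy: Semaphore}` map so each IP in the pool gets its own independent cap.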
Storing and Refreshing the Data
Restaurant listings change faster than most people expect. Delivery fees adjust by time of day, menu items go in and out of stock, and ratings shift week to week. A practical refresh schedule:
- Store list (feed-level) — weekly full crawl per city
- Menu and pricing — daily for top 20% of stores by order volume (estimated from rating count)
- ETA and delivery fee — near-real-time if you’re doing dynamic pricing analysis (hourly per zone)
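The daily-vs-weekly split above can be sketched by ranking stores on rating count (the order-volume proxy suggested above). `cadence_plan` is a hypothetical helper and the 20% cutoff follows the schedule, not any platform rule:

```python
def cadence_plan(stores):
    """Top 20% of stores by rating_count get daily menu crawls;
    the rest are covered by the weekly feed-level crawl."""
    ranked = sorted(stores, key=lambda s: s["rating_count"], reverse=True)
    cutoff = max(1, len(ranked) // 5)
    return {"daily": ranked[:cutoff], "weekly": ranked[cutoff:]}
```

Hourly ETA/fee polling sits outside this split — it runs per zone, not per store.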
Store your raw JSON in S3 or equivalent, and parse it into a structured schema separately. Uber Eats schema changes happen without notice — keeping raw responses lets you backfill without re-crawling.
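Partitioning raw responses by city and crawl date makes those backfills cheap: re-parse one day's prefix without touching the rest. The key layout below is an illustrative convention, not an Uber Eats requirement:

```python
import datetime as dt
import gzip
import json

def raw_key(city: str, store_uuid: str, ts: dt.datetime) -> str:
    """Object key for one raw store response, partitioned city/date."""
    return f"raw/ubereats/{city}/{ts:%Y/%m/%d}/{store_uuid}.json.gz"

def pack(response: dict) -> bytes:
    """Gzip the raw JSON before upload; menu payloads compress well."""
    return gzip.compress(json.dumps(response).encode())
```

A schema change then costs you one re-parse job over `raw/ubereats/<city>/<date>/` instead of a fresh crawl.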
If you’re comparing platforms for a menu intelligence product, How to Scrape Grubhub Menu Data Across Cities (2026) and How to Scrape Deliveroo Restaurant Menus UK + EU (2026) cover the respective GraphQL and REST patterns for those platforms — the session management approaches differ significantly. For Southeast Asia expansion, How to Scrape Foodpanda Menu Data Asia + EU (2026) is the closest analog in terms of zone-based delivery structure.
Handling Errors and Blocks
Common failure modes and how to respond:
- 403 on feed requests — CSRF token expired or IP flagged; rotate both IP and session UUID together
- Empty `stores` array with 200 OK — zone boundary miss, not a block; shift seed coordinates by ~500m
- `RATE_LIMIT_EXCEEDED` in response body — back off for 90 seconds minimum before retrying on the same IP
- Infinite spinner (Playwright) — PerimeterX challenge injected; switch to playwright-stealth with a warm cookie jar, or pivot to the mobile endpoint
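These failure modes translate directly into a dispatch function in the crawl loop. A sketch, assuming simplified body shapes (the action labels are illustrative; match on whatever your parser actually sees):

```python
def classify_failure(status: int, body: dict) -> str:
    """Map a response to a recovery action per the failure modes above."""
    if status == 403:
        return "rotate_ip_and_session"   # stale CSRF or flagged IP: rotate together
    if status == 429:
        return "backoff_retry_after"     # honor Retry-After before reusing the IP
    if "RATE_LIMIT_EXCEEDED" in str(body):
        return "backoff_90s"             # soft limit: 90s minimum on this IP
    if status == 200 and not body.get("stores"):
        return "shift_seed_500m"         # zone boundary miss, not a block
    return "ok"
```

Checking the rate-limit marker before the empty-`stores` case matters: a rate-limited 200 response also lacks stores, and you want the backoff, not a coordinate shift.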
Log the `uber-trace-id` response header on every failed request. It doesn’t help you bypass blocks, but it lets you correlate failures to specific sessions and see whether you’re hitting the same edge node repeatedly.
Bottom Line
Uber Eats is one of the harder food-delivery platforms to scrape reliably, but the mobile API path is genuinely workable in 2026 with the right TLS impersonation, session management, and residential proxy rotation. Prioritize per-zone coverage over per-city coverage, keep raw JSON, and build your refresh cadence around data volatility rather than a fixed interval. DRT covers scraping infrastructure for all major food-delivery platforms if you’re building cross-platform intelligence pipelines.