Just Eat (operating as Takeaway.com in several markets) serves restaurant and menu data across the UK, Netherlands, Germany, Belgium, and beyond — making it one of the most valuable food-delivery datasets for price intelligence, menu benchmarking, and competitor tracking. Scraping Just Eat menu data in 2026 is feasible, but it requires handling JavaScript rendering, rotating residential IPs, and rate limits that tighten significantly on repeated restaurant-detail requests.
## How Just Eat Serves Its Data
Just Eat runs a React-based frontend. The initial HTML shell is server-rendered, but menu items, prices, and dietary tags load via XHR calls to internal REST endpoints. The two paths worth targeting:
- **Restaurant list API:** `https://uk.api.just-eat.io/discovery/uk/restaurants/enriched/bypostcode/{postcode}` — returns paginated restaurant cards with cuisine type, rating, and minimum order.
- **Menu detail API:** `https://uk.api.just-eat.io/menus/{restaurantSlug}` — returns the full menu structure, including categories, item names, prices in pence, allergens, and modifier groups.
Both endpoints return JSON and are not gated by OAuth in most markets, but they enforce strict User-Agent validation and fingerprint the client's TLS signature. A bare `requests` call will hit a 403 within a few pages.
The Netherlands and German markets (takeaway.com) share the same API pattern with a different subdomain (api.takeaway.com) and slightly different slug formats. Plan your code to be market-aware from the start.
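One way to build in market awareness from the start is a small endpoint-configuration layer. The sketch below assumes the UK host and discovery path quoted above; the `api.takeaway.com` path shapes for NL/DE are an assumption and should be verified against live traffic before use.

```python
# Market-aware endpoint configuration: a minimal sketch.
# The UK host/path match the endpoints quoted in the article; the NL/DE
# discovery paths are assumed to mirror them and need verification.
from dataclasses import dataclass

@dataclass(frozen=True)
class Market:
    code: str        # e.g. "uk", "nl", "de"
    api_host: str    # API subdomain for this market
    language: str    # Accept-Language value to send

MARKETS = {
    "uk": Market("uk", "uk.api.just-eat.io", "en-GB,en;q=0.9"),
    "nl": Market("nl", "api.takeaway.com", "nl-NL,nl;q=0.9"),
    "de": Market("de", "api.takeaway.com", "de-DE,de;q=0.9"),
}

def restaurant_list_url(market: Market, postcode: str) -> str:
    # Documented for the UK market; other markets may differ.
    return (
        f"https://{market.api_host}/discovery/{market.code}"
        f"/restaurants/enriched/bypostcode/{postcode}"
    )
```

Keeping the host, path, and language together per market means the fetch functions below never hard-code a country.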
## Setting Up Your Scraper
Python with httpx is the right choice here — it supports HTTP/2, which matches what browsers send and reduces fingerprinting risk. Playwright is only necessary if you need screenshot verification or are hitting the web UI directly.
```python
import httpx, time, random

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/124.0 Safari/537.36",
    "Accept": "application/json",
    "Accept-Language": "en-GB,en;q=0.9",
    "x-requested-with": "XMLHttpRequest",
}

def fetch_restaurants(postcode: str, proxy: str) -> dict:
    url = f"https://uk.api.just-eat.io/discovery/uk/restaurants/enriched/bypostcode/{postcode}"
    # httpx >= 0.26 takes proxy=; older versions use proxies=
    with httpx.Client(http2=True, proxy=proxy, timeout=15) as client:
        r = client.get(url, headers=HEADERS)
        r.raise_for_status()
        return r.json()

def fetch_menu(slug: str, proxy: str) -> dict:
    url = f"https://uk.api.just-eat.io/menus/{slug}"
    with httpx.Client(http2=True, proxy=proxy, timeout=15) as client:
        r = client.get(url, headers=HEADERS)
        r.raise_for_status()
        time.sleep(random.uniform(1.5, 3.5))  # jitter before the next request
        return r.json()
```

Jitter between requests matters. Just Eat's rate limiter counts requests per IP per minute. Keep it under 20 requests/minute per IP and rotate after every 30-50 requests.
## Proxy and Anti-Bot Strategy
Just Eat uses Akamai Bot Manager. It checks TLS JA3 fingerprints, cookie behavior, and header order. Standard datacenter proxies fail almost immediately. You need residential or mobile proxies with proper TLS mimicry.
| Proxy Type | Success Rate | Cost / GB | Best For |
|---|---|---|---|
| Datacenter | ~15% | $0.50-1 | Not recommended |
| Residential | ~72% | $3-8 | Bulk postcode crawls |
| Mobile (4G SG/UK) | ~91% | $8-15 | High-value menu detail |
| ISP/Static residential | ~65% | $2-5 | Medium-volume pipelines |
For UK coverage specifically, UK-exit residential IPs are non-negotiable — Just Eat geo-gates API responses and returns empty result sets for non-UK IPs hitting the uk.api.just-eat.io subdomain.
If you are also scraping other food platforms, similar proxy requirements apply. How to Scrape Deliveroo Restaurant Menus UK + EU (2026) covers Cloudflare-specific bypass patterns that are directly portable to Just Eat since both platforms operate heavily in the same UK markets.
## Parsing the Menu Response
The menu JSON from Just Eat is well-structured but verbose. A typical restaurant returns 3-8 categories, each with 10-30 items, and each item includes modifier groups (size options, extras) that nest two levels deep.
Key fields to extract per item:
- `name` — display name of the dish
- `price` — price in pence (divide by 100 for GBP)
- `productInformation` — description text, often missing
- `allergens` — array of allergen strings (gluten, nuts, etc.)
- `modifierGroups[].modifiers[].price` — upcharge for each option
Dietary tags (`isVegan`, `isVegetarian`, `isSpicy`) sit at the item level and are reliable enough to use for filtering.
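A flattening pass turns that nested structure into one row per item. The field names below follow the list above; the surrounding envelope (a `categories` array containing `items`) is an assumption to verify against a real payload.

```python
def flatten_menu(menu: dict) -> list[dict]:
    """Flatten nested menu JSON into one row per item.

    Field names follow the article's list; the "categories" -> "items"
    envelope is assumed and should be checked against a live response.
    """
    rows = []
    for category in menu.get("categories", []):
        for item in category.get("items", []):
            rows.append({
                "category": category.get("name"),
                "name": item.get("name"),
                "price_gbp": item.get("price", 0) / 100,   # prices arrive in pence
                "description": item.get("productInformation") or "",
                "allergens": item.get("allergens", []),
                "is_vegan": item.get("isVegan", False),
                # Collect per-option upcharges from the two-level modifier nesting.
                "modifier_prices": [
                    m.get("price", 0) / 100
                    for g in item.get("modifierGroups", [])
                    for m in g.get("modifiers", [])
                ],
            })
    return rows
```

Storing the raw JSON and running this flattening as a separate step (as suggested under scaling below) means a schema surprise only breaks the parser, not the crawl.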
For Asian market equivalents, How to Scrape Foodpanda Menu Data Asia + EU (2026) covers a structurally similar nested modifier schema — the parsing logic is nearly reusable across both platforms.
## Scaling Across Postcodes
The UK has around 1.8 million postcodes, but only ~30,000 map to commercially relevant food-delivery zones. Use the ONS postcode database, filtered to active delivery postcodes, to build your seed list.
A practical crawl architecture:
- Split postcodes into batches of 500
- Run 4-6 parallel workers, each with a dedicated proxy rotation pool
- Deduplicate restaurants by `restaurantId` before fetching menus (each postcode overlaps heavily with neighbors)
- Store raw JSON to object storage, parse separately
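The deduplication step is the one that saves the most requests, since neighboring postcodes return largely the same restaurants. A minimal sketch, assuming each discovery response carries a `restaurants` array with `restaurantId` plus a menu slug field (called `slug` here — verify the real key name in a live response):

```python
def plan_menu_fetches(restaurant_pages: list[dict]) -> list[str]:
    """Collect unique menu slugs across overlapping postcode results.

    Assumes each page has a "restaurants" array whose entries carry
    "restaurantId" and "slug"; both key names should be verified.
    """
    seen: set[str] = set()
    slugs: list[str] = []
    for page in restaurant_pages:
        for r in page.get("restaurants", []):
            rid = r.get("restaurantId")
            if rid and rid not in seen:
                seen.add(rid)
                slugs.append(r["slug"])
    return slugs
```

Run this over all discovery responses in a batch before any menu fetch, so each restaurant costs exactly one menu request regardless of how many postcodes list it.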
Expect 60,000-80,000 unique UK restaurants with menus populated. Full crawl at 20 req/min per worker across 5 workers takes roughly 18-22 hours for an initial run.
For reservation and dine-in data alongside delivery data, How to Scrape OpenTable Restaurant + Reservation Data (2026) is worth reading — some restaurant chains list on both platforms and cross-referencing gives you richer profiles.
If you need a US food-delivery comparison, How to Scrape DoorDash Restaurant Menus and Pricing (2026) uses a similar GraphQL-over-REST hybrid and the detection evasion patterns overlap. For Southeast Asia, the How to Scrape GrabFood Restaurant and Menu Data guide covers GrabFood’s token-gated endpoints which are more restrictive than Just Eat’s current setup.
## Common Errors and Fixes
| Error | Cause | Fix |
|---|---|---|
| 403 on restaurant list | Bad TLS fingerprint or datacenter IP | Switch to residential + use httpx HTTP/2 |
| 429 Too Many Requests | >20 req/min per IP | Add jitter, reduce concurrency |
| Empty `restaurants` array | Non-UK IP on uk subdomain | Use UK-exit proxy |
| 404 on menu slug | Restaurant delisted or slug changed | Re-fetch restaurant list to get current slug |
| Malformed JSON / HTML response | Akamai challenge page served | Rotate IP immediately, back off 60s |
One subtle issue: Just Eat occasionally returns a 200 with an HTML Akamai interstitial instead of JSON. Always validate that the response's `Content-Type` header is `application/json` before parsing.
## Bottom Line
Just Eat menu scraping is well within reach in 2026 if you commit to UK residential proxies, HTTP/2 headers, and proper deduplication across postcodes. The internal API is stable and JSON-native — no Playwright needed for the happy path. DRT covers the full food-delivery scraping stack if you are building a multi-platform price intelligence pipeline, so check the related guides for Deliveroo, DoorDash, and Foodpanda to maximize data coverage without rebuilding your proxy and parsing layer from scratch.