Foodpanda Data Scraping: Menus, Prices, and Delivery Zones
Foodpanda, owned by Delivery Hero, operates across multiple Southeast Asian markets including Singapore, Malaysia, Thailand, the Philippines, Myanmar, Cambodia, and Laos. The platform offers a wealth of structured data on restaurant listings, menu items, pricing, delivery zones, and customer reviews that is valuable for market research, competitive analysis, and business intelligence.
This guide provides a comprehensive approach to scraping Foodpanda data across its SEA markets, covering technical setup, data extraction methods, and practical applications.
Foodpanda’s Data Structure
What Makes Foodpanda Data Valuable
Foodpanda’s data is valuable for several reasons:
- Multi-market presence: Consistent data structure across different countries enables cross-market analysis
- Detailed delivery zones: Foodpanda defines precise delivery areas with associated fees
- Rich restaurant metadata: Operating hours, preparation times, minimum orders, and promotional tags
- Pandamart integration: Grocery and convenience store data alongside restaurant data
Data Categories
| Category | Fields Available | Business Value |
|---|---|---|
| Restaurant Profiles | Name, address, coordinates, cuisine, hours, chain info | Market mapping |
| Menu Items | Name, price, description, category, availability, photos | Price intelligence |
| Delivery Zones | Coverage polygons, fee tiers, estimated times | Logistics planning |
| Promotions | Voucher codes, discount types, validity periods | Promotion tracking |
| Reviews | Rating, count, individual reviews, reply rates | Quality benchmarking |
| Pandamart | Products, pricing, stock status, categories | Grocery intelligence |
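As a loose sketch of how a few of these fields might land in typed records on the scraping side (the field names below are illustrative, not Foodpanda’s actual schema):

```python
from typing import List, Optional, TypedDict

class VendorRecord(TypedDict):
    # Restaurant profile fields
    code: str
    name: str
    latitude: float
    longitude: float
    cuisines: List[str]
    # Quality benchmarking fields
    rating: Optional[float]
    review_count: int

class MenuItemRecord(TypedDict):
    vendor_code: str
    category: str
    name: str
    price: float

# Example record (made-up values)
record: VendorRecord = {
    "code": "s1x2",
    "name": "Example Kitchen",
    "latitude": 1.29,
    "longitude": 103.85,
    "cuisines": ["Thai"],
    "rating": 4.5,
    "review_count": 120,
}
```

Typed records like these make downstream storage and cross-market joins easier to keep consistent.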
Technical Architecture
Foodpanda Web Platform
Foodpanda’s web presence (foodpanda.sg, foodpanda.my, foodpanda.co.th, etc.) is a modern single-page application that loads data through API endpoints. The web version is more accessible for scraping than the mobile app.
Key API patterns:
GET /api/v5/vendors?latitude={lat}&longitude={lng}&limit=48
GET /api/v5/vendors/{vendor_id}
GET /api/v5/vendors/{vendor_id}/menu
GET /api/v5/vendors/{vendor_id}/reviews
Anti-Bot Protections
Foodpanda employs several detection mechanisms:
- Akamai Bot Manager: Enterprise-grade bot detection
- Rate limiting: Per-IP and per-session request limits
- Device fingerprinting: Canvas, WebGL, and browser property analysis
- Geographic validation: IP location must match requested content region
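These protections typically surface as 403 or 429 responses rather than clean errors. A minimal retry wrapper with exponential backoff (the thresholds and retry counts here are illustrative, and `session` is any `requests.Session`-like object) helps separate transient rate limits from hard blocks that call for an IP rotation:

```python
import random
import time

def fetch_with_backoff(session, url, max_retries=4, base_delay=2.0, **kwargs):
    """Retry GETs that hit rate limits (429) or transient blocks (403)
    with exponential backoff plus jitter; return None on a persistent block."""
    for attempt in range(max_retries):
        response = session.get(url, **kwargs)
        if response.status_code == 200:
            return response
        if response.status_code in (403, 429):
            # Back off exponentially (~2s, 4s, 8s by default) with jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
            continue
        response.raise_for_status()
    return None  # still blocked -- rotate IP/session before trying again
```

A `None` return after several 403s usually means the session itself is flagged; retrying on the same IP only deepens the block.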
Setting Up for Foodpanda Scraping
Proxy Requirements
Foodpanda’s Akamai protection is particularly effective at identifying datacenter and low-quality residential proxies. Mobile proxies are strongly recommended because:
- Mobile IPs carry high trust scores with Akamai
- Mobile carrier NAT means shared IPs are normal and cannot be easily blacklisted
- Location targeting ensures your IP matches the country-specific Foodpanda domain
import math
import random
import time

import requests
class FoodpandaScraper:
COUNTRY_DOMAINS = {
"SG": "https://www.foodpanda.sg",
"MY": "https://www.foodpanda.my",
"TH": "https://www.foodpanda.co.th",
"PH": "https://www.foodpanda.ph",
"MM": "https://www.foodpanda.com.mm",
"KH": "https://www.foodpanda.com.kh",
"LA": "https://www.foodpanda.la"
}
def __init__(self, country="SG", proxy_user="", proxy_pass=""):
self.country = country
self.base_url = self.COUNTRY_DOMAINS.get(country, self.COUNTRY_DOMAINS["SG"])
self.session = requests.Session()
# DataResearchTools mobile proxy
proxy_host = f"{country.lower()}-mobile.dataresearchtools.com"
self.session.proxies = {
"http": f"http://{proxy_user}:{proxy_pass}@{proxy_host}:8080",
"https": f"http://{proxy_user}:{proxy_pass}@{proxy_host}:8080"
}
self.session.headers.update({
"User-Agent": "Mozilla/5.0 (Linux; Android 14; Pixel 8) AppleWebKit/537.36 "
"(KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36",
"Accept": "application/json, text/plain, */*",
"Accept-Language": "en-US,en;q=0.9",
"X-FP-API-KEY": "volo",
"Referer": self.base_url
})
def _initialize_session(self):
"""Load the main page to obtain necessary cookies."""
self.session.get(self.base_url)
        time.sleep(random.uniform(1, 3))
Scraping Restaurant Listings
Discovery by Location
def get_vendors(self, latitude, longitude, offset=0, limit=48):
"""Fetch restaurant listings for a location."""
params = {
"latitude": latitude,
"longitude": longitude,
"offset": offset,
"limit": limit,
"include": "characteristics",
"dynamic_pricing": 0,
"configuration": "Edesde",
"country": self.country.lower()
}
response = self.session.get(
f"{self.base_url}/api/v5/vendors",
params=params
)
if response.status_code == 200:
data = response.json()
return data.get("data", {}).get("items", [])
return []
def scrape_area(self, latitude, longitude):
"""Scrape all vendors in an area with pagination."""
all_vendors = []
offset = 0
limit = 48
while True:
vendors = self.get_vendors(latitude, longitude, offset, limit)
if not vendors:
break
all_vendors.extend(vendors)
offset += limit
if len(vendors) < limit:
break
time.sleep(random.uniform(2, 4))
    return all_vendors
Parsing Restaurant Data
def parse_vendor(self, vendor_data):
"""Extract structured data from a vendor response."""
return {
"id": vendor_data.get("id"),
"code": vendor_data.get("code"),
"name": vendor_data.get("name"),
"address": vendor_data.get("address"),
"latitude": vendor_data.get("latitude"),
"longitude": vendor_data.get("longitude"),
"city": vendor_data.get("city", {}).get("name"),
"cuisines": [c.get("name") for c in vendor_data.get("cuisines", [])],
"rating": vendor_data.get("rating"),
"review_count": vendor_data.get("review_number"),
"minimum_order": vendor_data.get("minimum_order_amount"),
"delivery_fee": vendor_data.get("delivery_fee", {}).get("value"),
"delivery_time": vendor_data.get("delivery_time"),
"is_active": vendor_data.get("is_active"),
"is_new": vendor_data.get("is_new"),
"has_online_payment": vendor_data.get("accepts_online_payment"),
"budget": vendor_data.get("budget"),
"chain": vendor_data.get("chain", {}).get("name") if vendor_data.get("chain") else None,
"characteristics": [
c.get("name") for c in vendor_data.get("characteristics", [])
]
    }
Scraping Menu Data
Full Menu Extraction
def get_vendor_menu(self, vendor_code):
"""Fetch the complete menu for a vendor."""
response = self.session.get(
f"{self.base_url}/api/v5/vendors/{vendor_code}/menu"
)
if response.status_code != 200:
return None
data = response.json()
menu_categories = data.get("data", {}).get("menus", [])
parsed_menu = []
for menu in menu_categories:
for category in menu.get("menu_categories", []):
category_name = category.get("name", "")
for product in category.get("products", []):
item = {
"vendor_code": vendor_code,
"category": category_name,
"name": product.get("name"),
"description": product.get("description", ""),
"price": product.get("product_variations", [{}])[0].get("price", 0),
"original_price": product.get("product_variations", [{}])[0].get("original_price"),
"is_available": product.get("is_available", True),
"is_popular": product.get("is_popular", False),
"image_url": product.get("file_path", ""),
"variations": self._parse_variations(product),
"toppings": self._parse_toppings(product)
}
# Calculate discount if applicable
if item["original_price"] and item["original_price"] > item["price"]:
item["discount_percent"] = round(
(1 - item["price"] / item["original_price"]) * 100, 1
)
else:
item["discount_percent"] = 0
parsed_menu.append(item)
return parsed_menu
def _parse_variations(self, product):
"""Parse product size and variation options."""
variations = []
for v in product.get("product_variations", []):
variations.append({
"name": v.get("name", "Standard"),
"price": v.get("price", 0),
"code": v.get("code")
})
return variations
def _parse_toppings(self, product):
"""Parse topping and modifier groups."""
toppings = []
for group in product.get("topping_groups", []):
toppings.append({
"group_name": group.get("name"),
"required": group.get("quantity_minimum", 0) > 0,
"max_selections": group.get("quantity_maximum"),
"options": [
{"name": t.get("name"), "price": t.get("price", 0)}
for t in group.get("toppings", [])
]
})
    return toppings
Scraping Delivery Zone Data
Understanding Delivery Zones
Foodpanda organizes delivery coverage into zones with different fee structures. This data is critical for logistics planning and market coverage analysis.
def get_delivery_info(self, vendor_code, delivery_lat, delivery_lng):
"""Get delivery fee and time estimate for a specific delivery address."""
params = {
"latitude": delivery_lat,
"longitude": delivery_lng,
"vendor_code": vendor_code
}
response = self.session.get(
f"{self.base_url}/api/v5/vendors/{vendor_code}/delivery-fee",
params=params
)
if response.status_code == 200:
data = response.json()
return {
"delivery_fee": data.get("delivery_fee"),
"delivery_fee_type": data.get("delivery_fee_type"),
"minimum_order": data.get("minimum_order_amount"),
"estimated_delivery_time": data.get("delivery_time"),
"is_in_delivery_zone": True
}
return {"is_in_delivery_zone": False}
def map_delivery_coverage(self, vendor_code, center_lat, center_lng, radius_km=5):
"""Map the delivery coverage area for a vendor."""
coverage_points = []
step = 0.002 # Approximately 200m
lat_range = radius_km / 111 # 1 degree latitude ~ 111km
lng_range = radius_km / (111 * abs(math.cos(math.radians(center_lat))))
lat = center_lat - lat_range
while lat <= center_lat + lat_range:
lng = center_lng - lng_range
while lng <= center_lng + lng_range:
delivery_info = self.get_delivery_info(vendor_code, lat, lng)
coverage_points.append({
"lat": lat,
"lng": lng,
"in_zone": delivery_info["is_in_delivery_zone"],
"fee": delivery_info.get("delivery_fee"),
"time": delivery_info.get("estimated_delivery_time")
})
lng += step
time.sleep(random.uniform(0.5, 1.5))
lat += step
    return coverage_points
Delivery Fee Analysis
def analyze_delivery_fees(self, vendors, sample_delivery_point):
"""Analyze delivery fee patterns across vendors."""
fee_data = []
for vendor in vendors:
info = self.get_delivery_info(
vendor["code"],
sample_delivery_point[0],
sample_delivery_point[1]
)
if info["is_in_delivery_zone"]:
fee_data.append({
"vendor": vendor["name"],
"cuisine": vendor.get("cuisines", ["Unknown"])[0],
"delivery_fee": info["delivery_fee"],
"min_order": info["minimum_order"],
"estimated_time": info["estimated_delivery_time"],
"distance_km": self._haversine(
vendor["latitude"], vendor["longitude"],
sample_delivery_point[0], sample_delivery_point[1]
)
})
time.sleep(random.uniform(1, 2))
# Calculate statistics
fees = [d["delivery_fee"] for d in fee_data if d["delivery_fee"] is not None]
return {
"vendors_analyzed": len(fee_data),
"avg_delivery_fee": round(sum(fees) / len(fees), 2) if fees else 0,
"min_fee": min(fees) if fees else 0,
"max_fee": max(fees) if fees else 0,
"free_delivery_count": len([f for f in fees if f == 0]),
"details": fee_data
    }
Multi-Country Scraping
Country-Specific Considerations
Each Foodpanda market has nuances:
| Market | Domain | Currency | Special Features |
|---|---|---|---|
| Singapore | foodpanda.sg | SGD | Pandamart, high delivery fees |
| Malaysia | foodpanda.my | MYR | Halal filtering, wide coverage |
| Thailand | foodpanda.co.th | THB | Thai language content, LINE integration |
| Philippines | foodpanda.ph | PHP | Metro Manila focus, COD payments |
| Myanmar | foodpanda.com.mm | MMK | Limited coverage, cash-heavy |
| Cambodia | foodpanda.com.kh | KHR/USD | Dual currency pricing |
| Laos | foodpanda.la | LAK | Vientiane-centric |
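Because each market prices in its local currency, cross-market price comparisons need normalization. A simple sketch (the FX rates below are placeholders; use a live rate source in practice, and note that Cambodian listings may already be priced in USD):

```python
# Placeholder FX rates to USD -- replace with a live rate feed in production
USD_RATES = {
    "SGD": 0.74, "MYR": 0.21, "THB": 0.028,
    "PHP": 0.017, "MMK": 0.00048, "KHR": 0.00024, "LAK": 0.000046,
}

COUNTRY_CURRENCY = {
    "SG": "SGD", "MY": "MYR", "TH": "THB",
    "PH": "PHP", "MM": "MMK", "KH": "KHR", "LA": "LAK",
}

def to_usd(price, country):
    """Convert a local-currency price to USD for cross-market comparison."""
    currency = COUNTRY_CURRENCY[country]
    return round(price * USD_RATES[currency], 2)
```

For example, `to_usd(12.0, "SG")` converts a S$12 dish into its USD equivalent at the placeholder rate, so Singapore and Bangkok menus can sit in the same price-band analysis.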
Cross-Market Data Collection
def scrape_across_markets(target_cities):
"""Scrape Foodpanda data across multiple SEA markets."""
results = {}
for country, cities in target_cities.items():
scraper = FoodpandaScraper(
country=country,
proxy_user="your_user",
proxy_pass="your_pass"
)
scraper._initialize_session()
results[country] = {}
for city_name, coords in cities.items():
vendors = scraper.scrape_area(coords["lat"], coords["lng"])
parsed = [scraper.parse_vendor(v) for v in vendors]
results[country][city_name] = parsed
print(f"{country}/{city_name}: {len(parsed)} vendors found")
time.sleep(random.uniform(5, 10))
return results
# Example usage
target_cities = {
"SG": {
"Central": {"lat": 1.2900, "lng": 103.8500},
"Jurong": {"lat": 1.3329, "lng": 103.7436}
},
"MY": {
"KL Central": {"lat": 3.1390, "lng": 101.6869},
"Petaling Jaya": {"lat": 3.1073, "lng": 101.6067}
},
"TH": {
"Bangkok Central": {"lat": 13.7563, "lng": 100.5018},
"Sukhumvit": {"lat": 13.7310, "lng": 100.5673}
}
}
Pandamart Grocery Data
Foodpanda’s Pandamart service offers grocery delivery, extending the platform’s data into retail territory:
def scrape_pandamart(self, latitude, longitude):
"""Scrape Pandamart product listings."""
params = {
"latitude": latitude,
"longitude": longitude,
"vertical": "shop",
"limit": 48
}
response = self.session.get(
f"{self.base_url}/api/v5/vendors",
params=params
)
if response.status_code != 200:
return []
shops = response.json().get("data", {}).get("items", [])
products = []
for shop in shops:
if "pandamart" in shop.get("name", "").lower() or shop.get("vertical") == "shop":
menu = self.get_vendor_menu(shop["code"])
if menu:
for item in menu:
item["shop_name"] = shop.get("name")
item["shop_type"] = "pandamart"
products.append(item)
time.sleep(random.uniform(2, 4))
    return products
Data Storage and Export
Structured Storage
import sqlite3
import json
def create_foodpanda_database(db_path="foodpanda_data.db"):
"""Create database schema for Foodpanda data."""
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
cursor.executescript("""
CREATE TABLE IF NOT EXISTS vendors (
id INTEGER PRIMARY KEY,
code TEXT UNIQUE,
name TEXT,
address TEXT,
latitude REAL,
longitude REAL,
city TEXT,
country TEXT,
cuisines TEXT,
rating REAL,
review_count INTEGER,
minimum_order REAL,
delivery_fee REAL,
delivery_time INTEGER,
is_chain INTEGER,
chain_name TEXT,
scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS menu_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
vendor_code TEXT,
category TEXT,
name TEXT,
description TEXT,
price REAL,
original_price REAL,
discount_percent REAL,
is_available INTEGER,
is_popular INTEGER,
scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (vendor_code) REFERENCES vendors(code)
);
CREATE TABLE IF NOT EXISTS delivery_zones (
id INTEGER PRIMARY KEY AUTOINCREMENT,
vendor_code TEXT,
latitude REAL,
longitude REAL,
in_zone INTEGER,
delivery_fee REAL,
estimated_time INTEGER,
checked_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (vendor_code) REFERENCES vendors(code)
);
""")
conn.commit()
    return conn
Practical Applications
1. Market Entry Research
Use Foodpanda data to evaluate new market opportunities:
- Count competitors by cuisine type in target delivery zones
- Analyze average pricing for your food category
- Assess delivery fee structures and minimum orders
- Identify gaps in cuisine coverage
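The competitor-count and rating checks above can be run directly on records from `parse_vendor`; a sketch with made-up sample data:

```python
from collections import Counter

def competitor_summary(vendors, target_cuisine):
    """Summarize the competitive field: vendors per cuisine, plus the
    count and average rating for the cuisine you plan to enter."""
    cuisine_counts = Counter(c for v in vendors for c in v.get("cuisines", []))
    ratings = [
        v["rating"]
        for v in vendors
        if target_cuisine in v.get("cuisines", []) and v.get("rating") is not None
    ]
    return {
        "target_competitors": cuisine_counts.get(target_cuisine, 0),
        "target_avg_rating": round(sum(ratings) / len(ratings), 2) if ratings else None,
        "by_cuisine": dict(cuisine_counts),
    }
```

Cuisines with few vendors but strong ratings elsewhere on the platform are the candidate gaps worth investigating further.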
2. Franchise Performance Monitoring
Multi-unit F&B brands can monitor franchise locations:
- Track menu consistency across outlets
- Verify pricing compliance
- Compare ratings between locations
- Monitor delivery times and availability
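The pricing-compliance check reduces to comparing each outlet's scraped menu against a reference price list; a minimal sketch (item names and tolerance are illustrative):

```python
def check_price_compliance(outlet_menu, reference_prices, tolerance=0.01):
    """Flag menu items whose scraped price deviates from the brand's
    reference price by more than `tolerance` (absolute difference)."""
    violations = []
    for item in outlet_menu:
        expected = reference_prices.get(item["name"])
        if expected is None:
            continue  # item not in the reference list; skip
        if abs(item["price"] - expected) > tolerance:
            violations.append({
                "item": item["name"],
                "expected": expected,
                "actual": item["price"],
            })
    return violations
```

Run against the output of `get_vendor_menu` for each outlet, this produces a per-location list of off-book prices to escalate.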
3. Delivery Zone Optimization
For restaurants already on Foodpanda:
- Map competitor delivery coverage
- Identify underserved areas within delivery range
- Optimize kitchen location for maximum coverage
- Track delivery fee changes from competitors
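Underserved areas can be computed from the coverage grids produced by `map_delivery_coverage`, assuming each vendor's grid was sampled over the same points (a sketch):

```python
def find_underserved_points(coverage_by_vendor):
    """Given {vendor_code: [{'lat', 'lng', 'in_zone', ...}, ...]} grids
    sampled over identical points, return points no vendor delivers to."""
    grids = list(coverage_by_vendor.values())
    if not grids:
        return []
    underserved = []
    for i, point in enumerate(grids[0]):
        # A point is underserved if it falls outside every vendor's zone
        if not any(grid[i]["in_zone"] for grid in grids):
            underserved.append((point["lat"], point["lng"]))
    return underserved
```

Clusters of underserved points adjacent to your current delivery zone are the natural candidates for expanded coverage or a satellite kitchen.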
Conclusion
Foodpanda’s structured data across multiple Southeast Asian markets makes it a prime target for F&B intelligence gathering. The platform’s Akamai-powered bot protection requires quality proxy infrastructure, and DataResearchTools mobile proxies provide the carrier-grade IPs needed for reliable access across all Foodpanda markets.
By combining systematic location-based scraping with proper data storage and analysis tools, businesses can build a comprehensive view of the food delivery landscape across Southeast Asia’s diverse markets. Start with one country, validate your approach, and expand to additional markets using the same infrastructure.
Related Reading
- Best Proxies for Food Delivery Platform Scraping
- How Cloud Kitchens Use Proxies for Competitive Menu Analysis
- aiohttp + BeautifulSoup: Async Python Scraping
- How to Scrape AliExpress Product Data Without Getting Blocked
- Amazon Buy Box Monitoring: Proxy Setup for Continuous Tracking
- How Anti-Bot Systems Detect Scrapers (Cloudflare, Akamai, PerimeterX)