how to scrape google local pack results (maps + business data) 2026
google local pack is the 3-result map block that appears on serps for local-intent queries like “coffee shop near me” or “lawyer in austin.” you can scrape it three ways in 2026: paid serp apis (serpapi, dataforseo, brightdata) at roughly $1.50-3 per 1000 queries; your own python scraper built on residential proxies and playwright, at near-zero per-query cost but higher engineering effort; or google maps directly, which returns 20+ results instead of just the 3-pack. this tutorial covers all three with working code.
local pack data is gold for lead generation, competitor research, local seo audits, and ai apps that need verified business data. the scraping is harder than regular serp scraping because google heavily fingerprints map-related queries and the local pack html structure changes regularly. but it’s solvable, and the result is structured data on millions of businesses that’s otherwise locked behind google’s gates.
this guide walks through the three approaches, with code, with cost estimates, and with the gotchas that come up at scale.
what’s in the local pack
a typical local pack on a query like “plumber miami” contains:
- 3 business listings (top 3 by google’s local ranking)
- each listing has: business name, rating (1-5 stars), review count, category, address, hours snippet, phone (sometimes), website (sometimes), gbid (google business id), latitude/longitude
- a “view all” link that opens the local finder (top 20 results)
- sometimes, ad placements above and below the pack
the underlying data lives in google’s local index, accessible via the regular serp html, the maps web ui, and the maps mobile app. each surface returns slightly different fields. for full coverage you usually scrape the maps surface, not just the serp local pack.
approach 1: paid serp apis (easiest)
three providers dominate this space in 2026:
- serpapi: $50/month for 5000 searches. local pack data included with engine=google_local.
- dataforseo: $0.0006-0.001 per organic search depending on plan. dedicated local pack endpoint.
- brightdata serp api: $1.50 per 1000 searches. covers all serp features including local pack.
at any serious volume the per-query rates drop under $1.50/1k. for prototypes and small jobs, paid apis are by far the easiest path. a minimal serpapi call:
```python
import requests

SERPAPI_KEY = "your-key"

def get_local_pack(query, location):
    r = requests.get("https://serpapi.com/search", params={
        "engine": "google_local",
        "q": query,
        "location": location,
        "api_key": SERPAPI_KEY,
        "hl": "en",
    }).json()
    return r.get("local_results", [])

results = get_local_pack("plumber", "miami, florida")
for r in results[:5]:
    print(r["title"], r.get("rating"), r.get("phone"), r.get("address"))
```
the same query against dataforseo, via its google maps serp endpoint:
```python
import requests
from requests.auth import HTTPBasicAuth

post_data = [{
    "keyword": "plumber",
    "location_name": "Miami,Florida,United States",
    "language_code": "en",
    "device": "desktop",
}]
r = requests.post(
    "https://api.dataforseo.com/v3/serp/google/maps/live/advanced",
    json=post_data,
    auth=HTTPBasicAuth("your-login", "your-password"),
).json()

items = r["tasks"][0]["result"][0]["items"]
for item in items[:10]:
    # rating can be missing or null, so guard before indexing into it
    print(item["title"], (item.get("rating") or {}).get("value"))
```
dataforseo’s pricing is the most aggressive at scale (under $0.001 per query at volume). serpapi has the friendliest sdk and free tier. bright data is the most reliable at very high volume.
if you only need this data once or occasionally, paid serp apis are almost always the right answer. you spend $5-50, you get the data, you move on.
approach 2: python scraper with residential proxies
cheaper at scale, more engineering work upfront. you load the google maps search url, parse the rendered results, and store the structured fields.
```python
import asyncio
import json
from playwright.async_api import async_playwright

PROXY = {
    "server": "http://residential.example.com:8080",
    "username": "user",
    "password": "pass",
}

async def scrape_maps(query, location):
    url = f"https://www.google.com/maps/search/{query.replace(' ', '+')}+{location.replace(' ', '+')}"
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False, proxy=PROXY)
        ctx = await browser.new_context(
            viewport={"width": 1366, "height": 768},
            user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
            locale="en-US",
        )
        page = await ctx.new_page()
        await page.goto(url, wait_until="networkidle", timeout=30000)
        await page.wait_for_timeout(3000)

        # scroll the results panel to load more
        results_panel = page.locator("div[role='feed']")
        for _ in range(3):
            await results_panel.evaluate("el => el.scrollBy(0, 800)")
            await page.wait_for_timeout(1500)

        # extract listing elements
        items = await page.locator("div[role='feed'] > div > div[jsaction]").all()
        results = []
        for item in items[:20]:
            try:
                name = await item.locator("div.fontHeadlineSmall").first.text_content()
                rating_el = await item.locator("span[role='img'][aria-label*='star']").first.get_attribute("aria-label")
                results.append({
                    "name": name.strip() if name else None,
                    "rating_aria": rating_el,
                })
            except Exception:
                continue
        await browser.close()
    return results

async def main():
    results = await scrape_maps("plumber", "miami florida")
    print(json.dumps(results, indent=2))

asyncio.run(main())
```
this is the rough shape. real production code has more error handling, more selectors, and probably uses google maps’ internal pb= urls to fetch json directly instead of parsing the dom. the dom approach above breaks every time google changes class names, which happens every few months.
a more robust pattern is to capture the maps json endpoint via network interception:
```python
async def capture_maps_json(query, location):
    captured = []
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        ctx = await browser.new_context()
        page = await ctx.new_page()

        async def handle_response(response):
            if "search?" in response.url and "/maps/" in response.url:
                try:
                    body = await response.text()
                    if body.startswith(")]}'"):
                        body = body[5:]  # strip google's anti-json-hijacking prefix
                    captured.append(json.loads(body))
                except Exception:
                    pass

        page.on("response", handle_response)
        await page.goto(f"https://www.google.com/maps/search/{query}+{location}",
                        wait_until="networkidle")
        await page.wait_for_timeout(5000)
        await browser.close()
    return captured
```
the captured json contains the full structured data google sees on its end. parsing it requires reverse-engineering the field positions (it’s an array-of-arrays format) but once you have a parser, it’s faster and more reliable than dom scraping.
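a small helper for that reverse-engineering step: it walks the captured nested lists and prints the index path to every string leaf, so you can eyeball which positions hold names, phone numbers, and urls. a sketch, assuming the payloads are the plain lists that json.loads returns; the paths you discover are observations about google’s current payload, not a stable contract.

```python
def walk_strings(node, path=()):
    # yield (index_path, value) for every string leaf in a nested list structure
    if isinstance(node, str):
        yield path, node
    elif isinstance(node, list):
        for i, child in enumerate(node):
            yield from walk_strings(child, path + (i,))

# explore the first captured payload:
# for path, value in walk_strings(captured[0]):
#     if len(value) > 3:
#         print(path, repr(value[:80]))
```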
for the proxy choice, residential is the minimum. mobile proxies pass through the toughest google blocks more reliably. datacenter ips are blocked within a few queries. see the residential proxy guide for context.
approach 3: scrape regular google serp html
if you only need the 3-pack (not the full 20-result local finder), you can scrape the regular google serp page. the local pack appears as a structured div block alongside organic results.
```python
import requests
from bs4 import BeautifulSoup

PROXY = {"http": "http://user:pass@residential.example.com:8080",
         "https": "http://user:pass@residential.example.com:8080"}

def scrape_serp_local(query, geo_param):
    url = f"https://www.google.com/search?q={query}&uule={geo_param}&hl=en"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": "en-US,en;q=0.9",
    }
    r = requests.get(url, headers=headers, proxies=PROXY, timeout=30)
    soup = BeautifulSoup(r.text, "html.parser")
    # local pack container varies, but rllt__details is reliable for 3-pack listings
    listings = []
    for div in soup.select("div.rllt__details"):
        name = div.select_one("div.dbg0pd")
        if name:
            listings.append({
                "name": name.get_text(strip=True),
                "snippet": " ".join(s.get_text() for s in div.select("div") if s != name),
            })
    return listings
```
the uule parameter is google’s encoded location. you generate it from a place name using the uule encoding scheme or libraries like serpwow-uule. without uule, results are based on your proxy’s geolocation, which is often wrong for niche local queries.
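the encoding itself is simple enough to do by hand. a minimal sketch of the commonly documented, community-reverse-engineered scheme (a fixed protobuf header, a length byte, then the canonical location name from google’s locations list, base64-encoded behind a “w+” prefix); verify the output against a known-good uule before trusting it:

```python
import base64

def make_uule(canonical_name: str) -> str:
    # commonly documented uule scheme (reverse-engineered, not an official
    # google api): protobuf fields often described as role=2 and producer=32,
    # followed by the canonical location name, base64-encoded behind "w+".
    # assumes the name is shorter than 64 bytes, which covers most locations.
    name_bytes = canonical_name.encode("utf-8")
    payload = b"\x08\x02\x10\x20\x22" + bytes([len(name_bytes)]) + name_bytes
    return "w+" + base64.b64encode(payload).decode().rstrip("=")

# make_uule("Miami,Florida,United States") -> "w+CAIQICI..."
# url-encode the value before dropping it into &uule=
```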
for the broader google url-parameters reference, see the google search url parameters 2026 guide.
extracting individual business details
the local pack listings give you basic data. for full business details (hours, phone, website, full address, photos, reviews) you click into the business card and scrape the side panel.
```python
async def scrape_business_details(page, business_url):
    await page.goto(business_url, wait_until="networkidle")
    await page.wait_for_timeout(2000)
    name = await page.locator("h1").first.text_content()
    address = await page.locator("button[data-item-id='address']").first.text_content()
    phone_el = page.locator("button[data-item-id^='phone']").first
    phone = await phone_el.text_content() if await phone_el.count() else None
    website_el = page.locator("a[data-item-id='authority']").first
    website = await website_el.get_attribute("href") if await website_el.count() else None
    rating = await page.locator("div.F7nice span[aria-hidden='true']").first.text_content()
    return {
        "name": name.strip() if name else None,
        "address": address.strip() if address else None,
        "phone": phone.strip() if phone else None,
        "website": website,
        "rating": rating,
    }
```
the gbid (google business id, also called cid) is in the page url after navigation. extract from page.url with a regex on the place/.../@.../data= segment.
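a hedged sketch of that extraction, treating the hex id pair inside the data= segment as an observed pattern rather than a documented contract:

```python
import re

def extract_cid(maps_url: str):
    # place urls typically embed a pair of hex ids in the data= segment,
    # e.g. ...!1s0x<feature-id>:0x<cid>...; the second value, read as a
    # decimal integer, is the cid / gbid. observed behavior, not guaranteed.
    m = re.search(r"0x[0-9a-fA-F]+:0x([0-9a-fA-F]+)", maps_url)
    return str(int(m.group(1), 16)) if m else None

# usage after scrape_business_details: cid = extract_cid(page.url)
```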
handling pagination and load more
google maps doesn’t paginate the local finder in the traditional sense. it loads more results as you scroll the left-side panel. the playwright code above scrolls 3 times. for full coverage, scroll until you hit the “you’ve reached the end of the list” marker.
```python
async def scroll_to_end(page):
    last_count = 0
    same_count_iterations = 0
    while same_count_iterations < 3:
        await page.locator("div[role='feed']").evaluate("el => el.scrollBy(0, 1000)")
        await page.wait_for_timeout(1500)
        items = await page.locator("div[role='feed'] > div > div[jsaction]").count()
        if items == last_count:
            same_count_iterations += 1
        else:
            same_count_iterations = 0
        last_count = items
    return last_count
```
most categories cap at 120 results in google maps. some niche categories or smaller markets cap at 20-40. that’s a hard ceiling.
rate limiting and avoiding blocks
google’s anti-scraping is aggressive on maps. patterns that get you blocked fast:
- many queries from the same ip in quick succession
- queries with no realistic delay between them
- consistent user-agent across all requests
- queries from datacenter ips
- geolocation mismatch between proxy and query (querying us businesses from a singapore ip)
mitigations:
- residential or mobile proxies, rotated per query
- 5-10 second delay between queries minimum
- random user-agent across a pool of 10-20 valid ones
- match your proxy geo to your query geo where possible
- spread queries across hours, not in a 1-minute burst
with those mitigations in place, a single residential proxy can do 100-500 queries a day before getting flagged. with mobile proxies, several thousand. for higher volume, parallelize across many proxies.
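a minimal sketch of that pacing and rotation, assuming a hypothetical scrape_one(query, proxy, user_agent) coroutine that wraps whichever scraper you built above:

```python
import asyncio
import random

# pools of rotating identities; fill these with real values for your setup
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
]
PROXIES = [
    "http://user:pass@res1.example.com:8080",  # hypothetical residential proxies
    "http://user:pass@res2.example.com:8080",
]

async def run_queries(queries, scrape_one):
    # scrape_one(query, proxy, user_agent) is a hypothetical wrapper around
    # whichever scraper you built above; this loop only handles pacing,
    # proxy rotation, and user-agent rotation
    results = []
    for i, query in enumerate(queries):
        proxy = PROXIES[i % len(PROXIES)]            # rotate proxy per query
        ua = random.choice(USER_AGENTS)              # random user-agent per query
        results.append(await scrape_one(query, proxy, ua))
        await asyncio.sleep(random.uniform(5, 10))   # 5-10s randomized delay
    return results
```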
cost comparison
estimated cost for 100,000 local pack queries:
| approach | cost | engineering effort |
|---|---|---|
| serpapi | $1000 (4x growth plan) | minimal |
| dataforseo | $60-150 | moderate (sdk integration) |
| brightdata serp api | $150 | minimal |
| diy with residential proxies | $50-100 (proxy bandwidth) | high (build + maintain) |
| diy with mobile proxies | $300-500 (mobile bandwidth) | high (build + maintain) |
dataforseo wins on raw cost at scale. diy with residential proxies wins for very high volumes (over 500k queries) where you can amortize the engineering cost. for under 100k queries, dataforseo is hard to beat.
faq
is scraping google maps legal?
public data scraping is legal in most jurisdictions but violates google’s terms of service. no consumer-protection law is triggered by scraping public business listings. for commercial use cases at scale, talk to a lawyer about cfaa exposure. the web scraping legal guide covers the case law.
can i use the official google places api instead?
yes, and you should if your use case fits. places api charges $17-32 per 1000 requests and is rate-limited. for small volumes it’s competitive with serp apis. for high volume scraping is far cheaper.
how many results does google local pack actually return?
the visible 3-pack is just the top 3. the local finder (clicking “view all”) shows up to 120. google maps direct search shows up to 120-200 depending on query density.
does serpapi return the gbid?
yes, in the place_id field. some legacy responses use gbid directly. dataforseo also returns it. roll-your-own scraping requires extracting it from the place url.
which proxy type works best for google maps?
residential or mobile. datacenter ips get blocked within a few queries. mobile is more reliable for high-volume sustained scraping. for context see the residential proxy guide.
how do i scrape google reviews for a business?
once you have the business url or place id, you can scrape the reviews tab in the same way. each review is a single json item in the maps response. expect 200-500 reviews per page load with infinite scroll.
conclusion
google local pack scraping is a solved problem in 2026 if you’re willing to spend on a paid serp api. dataforseo at $0.001 per query is the price-to-value sweet spot. serpapi is the easiest first integration. brightdata is the most reliable at very high volume.
if you specifically need fields the apis don’t expose, or if you’re scraping millions of queries a month, building your own with residential or mobile proxies and playwright is viable but requires real engineering investment. the dom selectors break every few months. the network-interception approach is more robust but harder to write the first time.
start with a paid serp api. measure your data needs against what they return. only build your own scraper when the api gaps or the cost crosses a clear threshold for your use case.