Best Proxies for Price Monitoring 2026: Tools, Setup, Anti-Block Tips
residential rotating proxies are the best choice for price monitoring across most retailers in 2026. ISP proxies work well when you need stable session IPs for cart-flow scraping, and mobile is overkill for static price pages. avoid datacenter for any retailer running modern bot protection (Akamai, Imperva, DataDome, PerimeterX), which is now most of them.
this guide ranks the best providers by retailer compatibility, walks through the setup, and covers the anti-block tactics that actually work.
why price monitoring is harder than it looks
price scraping trips bot detection more often than most use cases because:
- prices update frequently, so you need to recrawl often
- e-commerce sites invest heavily in WAF/anti-bot to protect dynamic pricing strategies
- catalogs are huge (millions of SKUs at Amazon, Walmart, Booking.com)
- many sites geo-vary prices, so you need IPs in the target market
- A/B tests and personalization mean two requests can return different prices
this is why proxy choice matters more here than for, say, a one-off competitive analysis.
quick picks by retailer category
| category | example sites | best proxy type |
|---|---|---|
| general e-commerce | Amazon, Walmart, Best Buy | residential rotating |
| travel and hospitality | Booking, Expedia, Airbnb | residential, geo-precise |
| airlines and OTAs | Kayak, Skyscanner | mobile or residential, sticky |
| marketplaces | eBay, AliExpress | residential rotating |
| local retailers | Target, Tesco, regional grocers | residential, country-specific |
| sportsbooks | Bet365, DraftKings | residential, geo-locked, sticky |
| luxury and DTC | Net-a-Porter, brand sites | residential or ISP, low rotation |
ranked: best providers for price monitoring 2026
1. Bright Data (best for hard targets)
100M+ residential IPs, granular geo-targeting (country, state, city, ISP, ASN), and the deepest geo coverage in the industry. essential for sites like Booking.com or major airlines that geo-vary aggressively.
pricing $8-15/GB at low volumes, dropping to $4-6/GB at high volume. expensive but consistently bypasses Akamai, Imperva, and PerimeterX where cheaper providers struggle.
2. Oxylabs (close second)
similar IP pool size to Bright Data, comparable success rate on tough targets, slightly different geo coverage. their Web Scraper API for E-commerce is purpose-built for retailer scraping with built-in unblocking. $8-12/GB.
3. SmartProxy (best price/performance)
55M residential IPs, $7/GB at entry tier, $2.50/GB at high volume. solid for general retailers and marketplaces. it occasionally struggles with the most hardened travel sites but covers 80% of price monitoring needs.
4. SOAX (good for niche geos)
residential and mobile pools with strong coverage in emerging markets (Southeast Asia, Latin America). useful when you need pricing data from regions Bright Data treats as second-tier. $9/GB residential.
5. IPRoyal (budget option)
cheaper residential ($1.75-3/GB) with smaller pool. works for soft targets like generic e-commerce. expect more retries on hard targets.
6. NetNut (ISP specialist)
ISP proxy specialist. fast and stable for session-based scraping (cart flows, multi-step price discovery). pricing $5-10/GB. less suitable for high-rotation random sampling.
7. Singapore Mobile Proxy (APAC + sticky)
dedicated mobile IPs on real Singapore carriers. ideal for sticky-session price scraping in APAC where you need to maintain login or geo-token cookies across many requests on the same IP. pricing in SGD, monthly per-IP.
we maintain a full provider comparison in our best proxy providers 2026 ultimate comparison guide.
sticky session vs rotating: the big choice
sticky session (one IP per user-session): use for cart flows, login-required pricing, or sites that fingerprint based on session continuity. the same IP for 5-30 minutes lets you complete a multi-step flow.
rotating (new IP per request): use for static price-page scraping at scale. each request fresh, no state carried, lower detection risk per request.
most price monitoring jobs are rotating. sticky is for the harder cases (Booking dates spread across multiple requests, airline searches that need pricing context).
geo-targeting: do not skip this
retailer prices vary by country, currency, and even city for some categories. always match the proxy IP geo to the target market.
# US pricing
PROXY_US = "http://user-country-us:pass@gate.smartproxy.com:7000"
# UK pricing
PROXY_UK = "http://user-country-gb:pass@gate.smartproxy.com:7000"
# Germany pricing
PROXY_DE = "http://user-country-de:pass@gate.smartproxy.com:7000"
for travel/hotel sites, city-level matters: a London IP and a Manchester IP can return different deals. Bright Data and Oxylabs offer city-level targeting.
complete setup with Python
import asyncio
import random

import httpx
from bs4 import BeautifulSoup

PROXIES = {
    "us": "http://user-country-us-session-{sid}:pass@gate.smartproxy.com:7000",
    "uk": "http://user-country-gb-session-{sid}:pass@gate.smartproxy.com:7000",
}

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_0) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
]

async def fetch_price(url, country="us", session_id=None):
    sid = session_id or random.randint(1000, 999999)
    proxy = PROXIES[country].format(sid=sid)
    headers = {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US,en;q=0.9" if country == "us" else "en-GB,en;q=0.9",
    }
    async with httpx.AsyncClient(proxy=proxy, timeout=20, follow_redirects=True) as c:
        r = await c.get(url, headers=headers)
        soup = BeautifulSoup(r.text, "lxml")
        price_el = soup.select_one("[itemprop='price'], .price, [data-price]")
        return {
            "url": url,
            "status": r.status_code,
            # parentheses matter: without them, .get() runs even when price_el is None
            "price": (price_el.get("content") or price_el.text.strip()) if price_el else None,
        }

# usage
async def main():
    urls = ["https://example-shop.com/sku/1234"]
    tasks = [fetch_price(u, country="us") for u in urls]
    results = await asyncio.gather(*tasks)
    print(results)

asyncio.run(main())
key points:
- `Accept-Language` should match the geo
- `follow_redirects=True` handles regional redirects
- session id in the proxy username gives sticky-IP behavior when needed; omit it for per-request rotation
for the full architecture see our proxies for price monitoring complete setup guide.
anti-block tactics that work
rotate user-agents alongside IPs. matching a Mac UA with a Windows IP fingerprint is a giveaway.
throttle per domain, not just globally. one request per 1-3 seconds per domain gives realistic human pacing.
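a per-domain limiter can be as simple as remembering each domain's last-request time. a minimal asyncio sketch using the 1-3 second pacing suggested above:

```python
import asyncio
import random
import time
from urllib.parse import urlparse

class DomainThrottle:
    """Enforces a randomized minimum delay between requests to the same domain."""

    def __init__(self, min_delay: float = 1.0, max_delay: float = 3.0):
        self.min_delay, self.max_delay = min_delay, max_delay
        self.last_hit: dict[str, float] = {}
        self.locks: dict[str, asyncio.Lock] = {}

    async def wait(self, url: str) -> None:
        domain = urlparse(url).netloc
        lock = self.locks.setdefault(domain, asyncio.Lock())
        async with lock:  # serialize scheduling per domain only
            delay = random.uniform(self.min_delay, self.max_delay)
            elapsed = time.monotonic() - self.last_hit.get(domain, 0.0)
            if elapsed < delay:
                await asyncio.sleep(delay - elapsed)
            self.last_hit[domain] = time.monotonic()
```

call `await throttle.wait(url)` before each fetch: requests to different domains proceed in parallel while requests to the same domain get human-ish pacing.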
mix in non-product page visits. a real user lands on the homepage, browses categories, then views products. scrapers that hit /product/123 a thousand times in a row stand out.
handle cart and login flows in a real browser. if you need authenticated prices, use Playwright with playwright-stealth, save cookies, then reuse them for unauth requests.
check for honeypot prices. some retailers serve fake high prices to flagged IPs to corrupt scraper data without alerting them. validate against a manual spot-check from a clean residential IP weekly.
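the validation step can be a plain tolerance check against the trusted spot-check. a sketch, with the 25% threshold as an illustrative default you would tune per category:

```python
def flag_honeypots(scraped: dict[str, float], spot_check: dict[str, float],
                   tolerance: float = 0.25) -> list[str]:
    """Return SKUs whose scraped price deviates from a trusted spot-check
    by more than `tolerance` (as a fraction of the trusted price).
    Large deviations suggest the IP was flagged and fed poisoned prices."""
    suspects = []
    for sku, trusted in spot_check.items():
        got = scraped.get(sku)
        if got is None:
            continue  # missing price is a fetch problem, not a honeypot
        if abs(got - trusted) / trusted > tolerance:
            suspects.append(sku)
    return suspects
```

quarantine flagged SKUs for re-fetch from a different IP rather than writing them to your price history.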
watch for soft blocks. status 200 with a CAPTCHA page or a “you appear to be a bot” message. parse the response, not just the HTTP status.
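a soft-block check is a body scan, not a status check. a sketch; the marker strings below are common block-page phrases, but every target needs its own list:

```python
# illustrative markers; tune per target site
SOFT_BLOCK_MARKERS = [
    "captcha", "are you a robot", "unusual traffic",
    "access denied", "verify you are human",
]

def is_soft_block(status_code: int, body: str) -> bool:
    """A 200 can still be a block page; inspect the body, not just the status."""
    if status_code in (403, 429, 503):
        return True
    lowered = body.lower()
    return any(marker in lowered for marker in SOFT_BLOCK_MARKERS)
```

route soft-blocked responses back into the retry queue with a new IP instead of parsing them as prices.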
scraping at scale: the architecture
for a 10M-page-per-day price monitoring system:
- queue layer: Redis or Kafka holds URL queue
- fetcher workers: Python `aiohttp` or `httpx` async, 100-500 concurrent per worker, residential proxy rotator
- render workers: Playwright cluster for the 5-10% of pages that need JS, kept separate from the fetcher fleet for cost control
- parser: BeautifulSoup or lxml in a thread pool
- storage: Postgres for current state, BigQuery or ClickHouse for historical
- scheduler: Airflow or Temporal for retry and dependency management
you can build this for $3-8K/month at 10M pages/day with residential proxies. for hardest targets (airlines, booking) the cost can double.
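the queue-plus-workers shape above can be sketched in a few lines. here `asyncio.Queue` stands in for Redis/Kafka and `fetch` is whatever fetcher you plug in (e.g. the `fetch_price` function from the setup section); the poison-pill shutdown is one simple convention among several:

```python
import asyncio

async def fetcher(queue: asyncio.Queue, results: list, fetch) -> None:
    """Fetcher worker: pull URLs from the queue until poisoned with None."""
    while True:
        url = await queue.get()
        if url is None:          # poison pill: shut this worker down
            queue.task_done()
            break
        results.append(await fetch(url))
        queue.task_done()

async def run_pipeline(urls, fetch, workers: int = 4) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    for u in urls:
        queue.put_nowait(u)
    for _ in range(workers):     # one poison pill per worker, after all URLs
        queue.put_nowait(None)
    await asyncio.gather(*(fetcher(queue, results, fetch) for _ in range(workers)))
    return results
```

swapping the in-memory queue for Redis and the results list for Postgres writes turns this into the production shape without changing the worker logic.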
what to avoid
- free public proxies. tested in 2026: 95%+ failure rate on Amazon, Walmart, Booking
- VPNs marketed as scraping solutions. shared IPs are heavily flagged
- datacenter proxies for Akamai-protected sites. instant block
- one-shop-fits-all approaches. mix proxy types per target if you have many targets
related: betting odds and bookmakers
if you scrape sportsbook prices (which is technically odds monitoring, structurally similar to price monitoring), the requirements are tighter because of geo-licensing. our bookmaker odds scraping guide covers the differences.
faq
how many proxies do I need to monitor 100,000 SKUs daily?
depends on update frequency and page weight. if you fetch HTML only (roughly 50-100KB per page gzipped), a once-daily refresh of 100k SKUs runs about 200-300GB of bandwidth per month. one residential rotating endpoint handles this easily; budget $400-800/month at SmartProxy or similar. pull full 2MB pages with images and scripts and the bill grows more than tenfold, so block media at the client.
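the arithmetic generalizes to any catalog size. a quick estimator, with illustrative numbers (100k SKUs, ~70KB HTML-only pages, once-daily refresh, $2.50/GB):

```python
def monthly_cost(skus: int, page_kb: float, refreshes_per_day: float,
                 usd_per_gb: float) -> float:
    """Estimated monthly proxy bandwidth cost for a recurring price crawl."""
    gb_per_month = skus * page_kb * refreshes_per_day * 30 / 1_048_576  # KB -> GB
    return gb_per_month * usd_per_gb

# ~200GB/month, roughly $500/month at $2.50/GB
print(round(monthly_cost(100_000, 70, 1, 2.50)))
```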
do I need a different proxy for each retailer?
no, one residential rotating endpoint covers most retailers. you do need different geo configs (US for amazon.com, UK for amazon.co.uk).
how often should I rotate IPs?
for static price scraping, rotate every request. for cart flow or session-required pages, sticky 5-15 minutes. let the use case drive the choice.
will retailers sue me?
historically rare for public price data. hiQ Labs v. LinkedIn (9th Cir.) held that scraping publicly accessible data is generally not a CFAA violation in the US. ToS violations are a contract issue, not a criminal one. consult a lawyer for high-volume commercial use. our web scraping legal guide covers this in detail.
can I use the same proxies for monitoring competitors’ ads?
yes, but ad systems (Google Ads transparency, Meta Ad Library) often block aggressively. for ad data, mobile or residential with extended sessions works best.
how do I handle dynamic prices that change mid-scrape?
track timestamps with each price snapshot. compare deltas in your analytics, not in your scraper. accept that prices are a sample at a point in time, not ground truth.
conclusion
residential rotating proxies are the workhorse for price monitoring in 2026. Bright Data and Oxylabs for the hardest targets, SmartProxy or SOAX for the rest, ISP for sticky-session needs, and dedicated mobile only when nothing else works.
start with one residential endpoint with country-level geo-targeting, build the basic pipeline, and only add complexity (multi-provider, mobile, ISP) when specific targets demand it. most price monitoring projects do not need exotic proxy stacks – they need solid execution on the basics.