How to Monitor Competitor Pricing Pages for Sales Intelligence
Understanding how competitors price their products gives your sales team a decisive advantage. When a prospect asks, “Why should I choose you over [Competitor]?” your team needs current, accurate data on competitor pricing, packaging, and feature comparisons. Manual monitoring is impractical when you track dozens of competitors across multiple product lines.
Automated pricing page monitoring with mobile proxies gives you real-time competitive intelligence. This guide covers building a system that tracks pricing changes, detects feature shifts, and delivers actionable insights to your sales team.
Why Proxy-Based Monitoring Is Necessary
Several obstacles make automated access to competitor pricing pages difficult:
- Geo-restricted pricing — Many SaaS companies show different prices by country.
- Dynamic pricing — Prices may vary based on visitor behavior, company size, or detected intent.
- Anti-scraping protection — Cloudflare, Akamai, and custom WAFs block datacenter IPs.
- JavaScript rendering — Interactive pricing calculators require full browser execution.
- Login walls — Some pricing details are only visible after account creation.
Mobile proxies bypass IP-based restrictions while enabling geo-specific pricing capture.
Building the Pricing Monitor
Architecture
[Competitor List] → [Scheduled Scraper] → [Change Detection] → [Alert System] → [Dashboard]

Competitor Configuration
```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CompetitorConfig:
    name: str
    pricing_url: str
    pricing_type: str  # "static", "dynamic", "calculator", "login_required"
    selectors: Dict[str, str] = field(default_factory=dict)
    geo_variants: List[str] = field(default_factory=list)
    check_frequency_hours: int = 24


COMPETITORS = [
    CompetitorConfig(
        name="Competitor A",
        pricing_url="https://competitor-a.com/pricing",
        pricing_type="static",
        selectors={
            "plans": '[class*="pricing-card"]',
            "plan_name": 'h3',
            "price": '[class*="price"]',
            "features": '[class*="feature-list"] li',
            "cta": '[class*="cta-button"]',
        },
        geo_variants=["US", "UK", "DE", "JP"],
        check_frequency_hours=12,
    ),
    CompetitorConfig(
        name="Competitor B",
        pricing_url="https://competitor-b.com/plans",
        pricing_type="calculator",
        selectors={
            "calculator_input": '[class*="slider"]',
            "price_output": '[class*="total-price"]',
        },
        check_frequency_hours=24,
    ),
]
```

Core Scraping Engine
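The PricingMonitor below depends on a `proxy_pool` object exposing `get_proxy(geo=...)` that returns a Playwright-style proxy dict. The article never defines it, so here is one minimal round-robin sketch; the class name, endpoint URLs, and pool structure are illustrative assumptions, not a real provider API:

```python
import itertools
import random


class ProxyPool:
    """Round-robin pool of geo-tagged proxy endpoints (placeholder values)."""

    def __init__(self, endpoints_by_geo):
        # endpoints_by_geo: {"US": ["http://user:pass@us-1.example.net:8000", ...], ...}
        self._cycles = {
            geo: itertools.cycle(endpoints)
            for geo, endpoints in endpoints_by_geo.items()
        }

    def get_proxy(self, geo="US"):
        """Return the next endpoint for a region as a Playwright proxy dict."""
        # Fall back to a random region if we have no endpoints for this geo
        cycle = self._cycles.get(geo) or self._cycles[random.choice(list(self._cycles))]
        return {"server": next(cycle)}
```

Playwright also accepts `username`/`password` keys in the proxy dict if your provider uses separate credential fields rather than credentials embedded in the URL.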
```python
from playwright.async_api import async_playwright
import asyncio
import hashlib
import json
import random
import re
from datetime import datetime


class PricingMonitor:
    """Monitor competitor pricing pages for changes."""

    def __init__(self, competitors, proxy_pool, storage):
        self.competitors = competitors
        self.proxy_pool = proxy_pool
        self.storage = storage

    async def check_all_competitors(self):
        """Run a pricing check across all competitors."""
        results = []
        for competitor in self.competitors:
            proxy = self.proxy_pool.get_proxy(geo="US")
            result = await self.check_competitor(competitor, proxy)
            results.append(result)
            await asyncio.sleep(random.uniform(5, 15))  # pace requests between targets
        return results

    async def check_competitor(self, config, proxy_config):
        """Check a single competitor's pricing page."""
        async with async_playwright() as p:
            browser = await p.chromium.launch(
                proxy=proxy_config,
                headless=True,
            )
            context = await browser.new_context(
                viewport={"width": 1920, "height": 1080},
                locale="en-US",
            )
            page = await context.new_page()
            try:
                await page.goto(config.pricing_url, wait_until="networkidle", timeout=30000)
                await page.wait_for_timeout(random.randint(3000, 6000))

                if config.pricing_type == "calculator":
                    pricing_data = await self.extract_calculator_pricing(page, config)
                elif config.pricing_type == "login_required":
                    pricing_data = await self.extract_authenticated_pricing(page, config)
                else:
                    # "static", "dynamic", and any unknown type use the static extractor
                    pricing_data = await self.extract_static_pricing(page, config)

                # Take a full-page screenshot for visual comparison
                screenshot = await page.screenshot(full_page=True)

                # Compare against the last stored snapshot
                change_result = self.detect_changes(config.name, pricing_data)

                result = {
                    "competitor": config.name,
                    "timestamp": datetime.utcnow().isoformat(),
                    "pricing_data": pricing_data,
                    "changes_detected": change_result["changed"],
                    "change_details": change_result.get("details", []),
                    "screenshot": screenshot,
                }

                # Store the current snapshot for the next comparison
                self.storage.save_pricing(config.name, pricing_data)
                return result
            except Exception as e:
                return {
                    "competitor": config.name,
                    "error": str(e),
                    "timestamp": datetime.utcnow().isoformat(),
                }
            finally:
                await browser.close()

    async def extract_authenticated_pricing(self, page, config):
        """Login flows vary too widely to cover here; implement per target."""
        raise NotImplementedError("login_required extraction is site-specific")

    async def extract_static_pricing(self, page, config):
        """Extract pricing from a static pricing page."""
        plans = []
        plan_cards = await page.query_selector_all(config.selectors.get("plans", ".pricing-card"))
        for card in plan_cards:
            plan = {}
            name_el = await card.query_selector(config.selectors.get("plan_name", "h3"))
            if name_el:
                plan["name"] = (await name_el.inner_text()).strip()
            price_el = await card.query_selector(config.selectors.get("price", ".price"))
            if price_el:
                price_text = (await price_el.inner_text()).strip()
                plan["price_text"] = price_text
                plan["price_numeric"] = self.parse_price(price_text)
            feature_els = await card.query_selector_all(
                config.selectors.get("features", "li")
            )
            plan["features"] = [(await feat.inner_text()).strip() for feat in feature_els]
            cta_el = await card.query_selector(config.selectors.get("cta", "button"))
            if cta_el:
                plan["cta_text"] = (await cta_el.inner_text()).strip()
            if plan.get("name"):
                plans.append(plan)
        return {"plans": plans, "page_title": await page.title()}

    async def extract_calculator_pricing(self, page, config):
        """Extract pricing from calculator/slider-based pricing."""
        results = []
        # Probe several quantity levels to map the pricing curve
        test_quantities = [100, 500, 1000, 5000, 10000, 50000]
        for qty in test_quantities:
            slider = await page.query_selector(
                config.selectors.get("calculator_input", "input[type='range']")
            )
            if slider:
                await slider.fill(str(qty))
                await page.wait_for_timeout(1000)
            price_el = await page.query_selector(
                config.selectors.get("price_output", ".total-price")
            )
            if price_el:
                price_text = (await price_el.inner_text()).strip()
                results.append({
                    "quantity": qty,
                    "price_text": price_text,
                    "price_numeric": self.parse_price(price_text),
                })
        return {"calculator_results": results}

    @staticmethod
    def parse_price(price_text):
        """Extract a numeric price from text like "$1,299/mo"."""
        match = re.search(r'[$£€]?\s*([\d,]+\.?\d*)', price_text)
        if match:
            return float(match.group(1).replace(',', ''))
        return None
```

Change Detection
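detect_changes below compares a fresh scrape against the last snapshot held by `self.storage`, which the article leaves undefined. A minimal JSON-file-backed store satisfying the two calls the monitor makes (`save_pricing` and `get_latest_pricing`) might look like this; the class name and file layout are assumptions, not the article's implementation:

```python
import json
from pathlib import Path


class PricingStorage:
    """Keeps one JSON snapshot file per competitor (minimal sketch)."""

    def __init__(self, data_dir="pricing_snapshots"):
        self.data_dir = Path(data_dir)
        self.data_dir.mkdir(parents=True, exist_ok=True)

    def _path(self, competitor_name):
        safe = competitor_name.replace(" ", "_").lower()
        return self.data_dir / f"{safe}.json"

    def save_pricing(self, competitor_name, pricing_data):
        """Overwrite the latest snapshot for this competitor."""
        self._path(competitor_name).write_text(json.dumps(pricing_data, indent=2))

    def get_latest_pricing(self, competitor_name):
        """Return the last snapshot, or None on first capture."""
        path = self._path(competitor_name)
        if not path.exists():
            return None
        return json.loads(path.read_text())
```

A production system would more likely keep the full history in a database so you can chart price trends over time, not just diff against the latest snapshot.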
```python
    # Continues the PricingMonitor class above
    def detect_changes(self, competitor_name, current_data):
        """Compare current pricing with the stored historical snapshot."""
        previous_data = self.storage.get_latest_pricing(competitor_name)
        if not previous_data:
            return {"changed": False, "details": ["First data capture"]}

        changes = []
        # Cheap short-circuit: identical payloads hash identically
        current_hash = hashlib.md5(json.dumps(current_data, sort_keys=True).encode()).hexdigest()
        previous_hash = hashlib.md5(json.dumps(previous_data, sort_keys=True).encode()).hexdigest()
        if current_hash == previous_hash:
            return {"changed": False, "details": []}

        # Detailed change analysis
        if "plans" in current_data and "plans" in previous_data:
            current_plans = {p["name"]: p for p in current_data["plans"]}
            previous_plans = {p["name"]: p for p in previous_data["plans"]}

            # New plans added
            for name in current_plans:
                if name not in previous_plans:
                    changes.append({
                        "type": "plan_added",
                        "plan": name,
                        "details": current_plans[name],
                    })

            # Plans removed
            for name in previous_plans:
                if name not in current_plans:
                    changes.append({
                        "type": "plan_removed",
                        "plan": name,
                    })

            # Price and feature changes on plans present in both snapshots
            for name in current_plans:
                if name not in previous_plans:
                    continue
                curr_price = current_plans[name].get("price_numeric")
                prev_price = previous_plans[name].get("price_numeric")
                if curr_price and prev_price and curr_price != prev_price:
                    pct_change = ((curr_price - prev_price) / prev_price) * 100
                    changes.append({
                        "type": "price_change",
                        "plan": name,
                        "previous": prev_price,
                        "current": curr_price,
                        "percent_change": round(pct_change, 1),
                    })

                curr_features = set(current_plans[name].get("features", []))
                prev_features = set(previous_plans[name].get("features", []))
                added_features = curr_features - prev_features
                removed_features = prev_features - curr_features
                if added_features:
                    changes.append({
                        "type": "features_added",
                        "plan": name,
                        "features": list(added_features),
                    })
                if removed_features:
                    changes.append({
                        "type": "features_removed",
                        "plan": name,
                        "features": list(removed_features),
                    })

        return {"changed": len(changes) > 0, "details": changes}
```

Geo-Specific Pricing Capture
Many competitors show different prices by region. Use geo-targeted proxies to capture regional variations. For proxy terminology and concepts, see our proxy glossary.
```python
async def capture_geo_pricing(config, proxy_pool):
    """Capture pricing across multiple geographies."""
    geo_pricing = {}
    regions = {
        "US": {"locale": "en-US", "timezone": "America/New_York", "currency": "USD"},
        "UK": {"locale": "en-GB", "timezone": "Europe/London", "currency": "GBP"},
        "DE": {"locale": "de-DE", "timezone": "Europe/Berlin", "currency": "EUR"},
        "JP": {"locale": "ja-JP", "timezone": "Asia/Tokyo", "currency": "JPY"},
        "AU": {"locale": "en-AU", "timezone": "Australia/Sydney", "currency": "AUD"},
    }
    for region_code, region_config in regions.items():
        if region_code not in config.geo_variants:
            continue
        proxy = proxy_pool.get_proxy(geo=region_code)
        async with async_playwright() as p:
            browser = await p.chromium.launch(proxy=proxy)
            context = await browser.new_context(
                locale=region_config["locale"],
                timezone_id=region_config["timezone"],
            )
            page = await context.new_page()
            try:
                await page.goto(config.pricing_url, wait_until="networkidle", timeout=30000)
                await page.wait_for_timeout(random.randint(3000, 6000))
                # extract_static_pricing never touches self, so calling it
                # unbound through the class works here
                pricing = await PricingMonitor.extract_static_pricing(None, page, config)
                geo_pricing[region_code] = {
                    "currency": region_config["currency"],
                    "pricing": pricing,
                }
            finally:
                await browser.close()
        await asyncio.sleep(random.uniform(10, 30))  # space out regional checks
    return geo_pricing
```

Alerting and Reporting
Slack Integration
```python
import json

import requests


def send_pricing_alert(changes, webhook_url):
    """Send a pricing change alert to Slack via an incoming webhook."""
    blocks = [
        {
            "type": "header",
            "text": {"type": "plain_text", "text": "Competitor Pricing Change Detected"},
        }
    ]
    for change in changes:
        if change["type"] == "price_change":
            emoji = ":chart_with_upwards_trend:" if change["percent_change"] > 0 else ":chart_with_downwards_trend:"
            text = f"{emoji} *{change['plan']}*: ${change['previous']} -> ${change['current']} ({change['percent_change']:+.1f}%)"
        elif change["type"] == "plan_added":
            text = f":new: New plan added: *{change['plan']}*"
        elif change["type"] == "plan_removed":
            text = f":x: Plan removed: *{change['plan']}*"
        elif change["type"] == "features_added":
            text = f":heavy_plus_sign: Features added to *{change['plan']}*: {', '.join(change['features'])}"
        elif change["type"] == "features_removed":
            text = f":heavy_minus_sign: Features removed from *{change['plan']}*: {', '.join(change['features'])}"
        else:
            text = f"Change detected: {json.dumps(change)}"
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn", "text": text},
        })
    requests.post(webhook_url, json={"blocks": blocks}, timeout=10)
```

Battle Card Generation
Convert pricing intelligence into sales-ready battle cards:
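The generator below calls several helpers (`find_closest_plan`, `find_our_advantages`, `find_their_advantages`, `generate_talk_track`) that the article does not define. One hedged sketch, matching plans by price proximity and diffing feature sets; the matching heuristic and talk-track wording are illustrative assumptions:

```python
def find_closest_plan(their_plan, our_pricing):
    """Pick our plan whose price is closest to theirs (None if no prices)."""
    their_price = their_plan.get("price_numeric")
    candidates = [
        p for p in our_pricing.get("plans", [])
        if p.get("price_numeric") is not None
    ]
    if their_price is None or not candidates:
        return None
    return min(candidates, key=lambda p: abs(p["price_numeric"] - their_price))


def find_our_advantages(our_plan, their_plan):
    """Features we list that the comparable competitor plan does not."""
    return sorted(set(our_plan.get("features", [])) - set(their_plan.get("features", [])))


def find_their_advantages(our_plan, their_plan):
    """Features the competitor lists that our comparable plan does not."""
    return sorted(set(their_plan.get("features", [])) - set(our_plan.get("features", [])))


def generate_talk_track(our_plan, their_plan):
    """One-line positioning hint for reps (very rough heuristic)."""
    ours = our_plan.get("price_numeric")
    theirs = their_plan.get("price_numeric")
    if ours is not None and theirs is not None and ours < theirs:
        return f"Lead with price: {our_plan['name']} undercuts {their_plan['name']}."
    return f"Lead with differentiated features of {our_plan['name']}."
```

Real battle cards usually need human review on top of this: feature strings rarely match verbatim across vendors, so fuzzy matching or a curated feature taxonomy quickly becomes necessary.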
```python
def generate_battle_card(competitor_name, pricing_data, our_pricing):
    """Generate a competitive battle card."""
    card = {
        "competitor": competitor_name,
        "generated_at": datetime.utcnow().isoformat(),
        "comparisons": [],
    }
    for their_plan in pricing_data.get("plans", []):
        # Find our closest matching plan
        our_match = find_closest_plan(their_plan, our_pricing)
        if our_match:
            comparison = {
                "their_plan": their_plan["name"],
                "their_price": their_plan.get("price_numeric"),
                "our_plan": our_match["name"],
                "our_price": our_match.get("price_numeric"),
                "our_advantages": find_our_advantages(our_match, their_plan),
                "their_advantages": find_their_advantages(our_match, their_plan),
                "suggested_talk_track": generate_talk_track(our_match, their_plan),
            }
            card["comparisons"].append(comparison)
    return card
```

Integrating with Sales Workflows
Push competitive intelligence into tools your sales team already uses. Teams that also use proxies for web scraping can feed competitive data alongside lead data into unified dashboards.
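Something has to run the checks on each competitor's cadence before results can flow into these tools. A minimal asyncio scheduling loop, assuming the PricingMonitor class from earlier; real deployments more often use cron, Celery beat, or a workflow orchestrator, so treat this as a sketch:

```python
import asyncio
from datetime import datetime, timedelta


def due_competitors(competitors, next_due, now):
    """Return the competitors whose next scheduled check has come due."""
    return [c for c in competitors if now >= next_due[c.name]]


async def run_monitor_forever(monitor, competitors):
    """Re-check each competitor on its own check_frequency_hours cadence."""
    next_due = {c.name: datetime.utcnow() for c in competitors}  # all due immediately
    while True:
        now = datetime.utcnow()
        for competitor in due_competitors(competitors, next_due, now):
            proxy = monitor.proxy_pool.get_proxy(geo="US")
            result = await monitor.check_competitor(competitor, proxy)
            if result.get("changes_detected"):
                pass  # hand off to send_pricing_alert / push_to_salesforce here
            next_due[competitor.name] = now + timedelta(hours=competitor.check_frequency_hours)
        await asyncio.sleep(60)  # poll the schedule once a minute
```

Keeping the schedule state in memory means a restart re-checks everything immediately; persisting `next_due` alongside the pricing snapshots avoids that.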
```python
def push_to_salesforce(battle_card, sf_client):
    """Update Salesforce competitive-intel fields."""
    # NB: sanitize/escape the name before interpolating it into SOQL in production
    competitor_record = sf_client.query(
        f"SELECT Id FROM Competitor__c WHERE Name = '{battle_card['competitor']}'"
    )
    if competitor_record['records']:
        sf_client.Competitor__c.update(
            competitor_record['records'][0]['Id'],
            {
                'Latest_Pricing__c': json.dumps(battle_card['comparisons']),
                'Last_Checked__c': battle_card['generated_at'],
                'Price_Change_Alert__c': True,
            },
        )
```

Conclusion
Automated competitor pricing monitoring transforms your sales team from reactive to proactive. Instead of discovering pricing changes from prospects during negotiations, your team knows about changes within hours and can adjust positioning accordingly. Mobile proxies ensure reliable access to competitor websites across geographies, while change detection algorithms surface only the insights that matter. Build this system once, and it delivers continuous competitive intelligence with minimal maintenance.
Related Reading
- How to Build an Automated Lead Scraping Pipeline with Proxies
- Building a B2B Contact Enrichment Pipeline with Mobile Proxies
- How to Scrape Job Listings at Scale with Rotating Proxies
- Proxies for HR Tech: Salary Benchmarking & Talent Intelligence
- aiohttp + BeautifulSoup: Async Python Scraping
- How to Scrape AliExpress Product Data Without Getting Blocked