Monitoring Port Congestion and Shipping Delays with Web Data
Port congestion can turn a well-planned supply chain into a logistical nightmare. When vessels queue for days waiting to berth, containers sit idle on terminal yards, and inland transport connections become overwhelmed, the ripple effects impact shipping costs, delivery timelines, and inventory planning. In Southeast Asia, where major ports like Singapore, Tanjung Priok, Laem Chabang, and Cat Lai handle enormous trade volumes, congestion events are frequent and their impacts are significant.
Real-time visibility into port congestion and shipping delays enables logistics teams to reroute shipments, adjust schedules, and proactively communicate with customers. This guide explains how to build a port congestion monitoring system using web data collection and proxy infrastructure.
Understanding Port Congestion
What Causes Port Congestion
Port congestion results from a mismatch between port capacity and demand. Common causes include:
Demand surges: Peak shipping seasons, pre-holiday inventory builds, and post-disruption recovery periods create volume spikes that exceed port handling capacity.
Vessel bunching: When multiple large vessels arrive simultaneously, often due to weather delays or schedule disruptions, ports cannot process them all at once.
Equipment shortages: Insufficient cranes, yard tractors, or chassis limit throughput even when berth space is available.
Labor issues: Strikes, COVID restrictions, or labor shortages reduce operational capacity.
Inland transport bottlenecks: Even when ports can unload vessels quickly, containers may accumulate if trucking or rail connections cannot clear them fast enough.
Weather events: Typhoons, monsoons, and other weather events in Southeast Asia regularly disrupt port operations.
Measuring Congestion
Key metrics for port congestion include:
- Vessel waiting time: Average time from arrival at anchorage to berthing
- Berth utilization: Percentage of berth slots occupied
- Yard density: Ratio of containers in yard to yard capacity
- Dwell time: Average time containers spend in the terminal before pickup
- Vessel queue length: Number of vessels waiting to berth
- Turnaround time: Total time from vessel arrival to departure
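Several of these metrics fall out directly from per-vessel event timestamps. A minimal sketch, assuming arrival, berthing, and departure times are available as ISO-8601 strings (the field names here are illustrative, not from any specific AIS feed):

```python
from datetime import datetime

def vessel_metrics(arrival, berthing, departure):
    """Compute waiting and turnaround time (in hours) from AIS event
    timestamps given as ISO-8601 strings."""
    t_arrive = datetime.fromisoformat(arrival)
    t_berth = datetime.fromisoformat(berthing)
    t_depart = datetime.fromisoformat(departure)
    return {
        # Time from arrival at anchorage to berthing
        "waiting_hours": (t_berth - t_arrive).total_seconds() / 3600,
        # Total time from arrival to departure
        "turnaround_hours": (t_depart - t_arrive).total_seconds() / 3600,
    }
```

Averaging these per-vessel values over all calls in a window gives the port-level waiting and turnaround figures; queue length, berth utilization, and yard density come from snapshot counts instead.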
Data Sources for Port Congestion Monitoring
AIS (Automatic Identification System) Data
AIS transponders on vessels broadcast location, speed, heading, and vessel identity. Several platforms aggregate AIS data:
- MarineTraffic: One of the largest AIS data platforms, showing real-time vessel positions and port traffic
- VesselFinder: Similar vessel tracking with historical position data
- FleetMon: Vessel tracking with port analytics features
- MyShipTracking: Provides vessel tracking and port activity data
These platforms allow you to monitor:
- How many vessels are at anchorage near a port (indicating congestion)
- Average time vessels spend at anchorage before berthing
- Vessel traffic trends over time
- Specific vessel positions and ETAs
Port Authority Websites
Port authorities publish operational data:
- PSA Singapore: Publishes container throughput statistics and terminal information
- IPC (Indonesia Port Corporation): Provides Tanjung Priok operational updates
- Laem Chabang Port: Publishes berth schedules and operational notices
- Cat Lai Terminal: Provides vessel schedules and terminal notifications
- Port Klang Authority: Publishes traffic statistics and operational information
Shipping Line Schedule Data
Carrier websites show schedule changes that indicate congestion:
- Blank sailings (cancelled vessel calls)
- Port rotation changes
- Schedule reliability metrics
- Revised ETAs for active voyages
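Once carrier schedule data is collected, the first two signals above can be extracted with a simple pass over the port calls. A sketch, assuming a hypothetical normalized record shape (`port`, `status`, `scheduled_eta_hours`, `revised_eta_hours`) rather than any carrier's actual schema:

```python
def schedule_congestion_signals(calls):
    """Extract congestion signals from normalized carrier schedule data.

    `calls` is a list of dicts with illustrative keys:
    'port', 'status', 'scheduled_eta_hours', 'revised_eta_hours'.
    """
    signals = []
    for call in calls:
        if call.get("status") == "blank":
            # Cancelled vessel call (blank sailing)
            signals.append((call["port"], "BLANK_SAILING", None))
        else:
            slip = (
                call.get("revised_eta_hours", 0)
                - call.get("scheduled_eta_hours", 0)
            )
            if slip >= 24:
                # ETA slipped by a day or more -- likely queueing at the port
                signals.append((call["port"], "ETA_SLIP", slip))
    return signals
```

A rising count of blank sailings or day-plus ETA slips at one port, across multiple carriers, is a strong corroborating signal alongside the AIS queue data.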
News and Industry Sources
Industry publications and news sources report on congestion events:
- Splash247, The Loadstar, and Lloyd’s List for shipping news
- Port-specific social media channels
- Logistics industry forums and groups
Building a Port Congestion Monitoring System
System Architecture
```
Data Sources           Proxy Layer             Processing          Output
                   (DataResearchTools)

AIS Platforms    --->  Mobile Proxies    --->  Congestion      --> Dashboard
Port Authorities --->  (country-specific) -->  Calculator      --> Alerts
Shipping Lines   --->                     -->  Trend Analyzer  --> API
News Sources     --->                     -->  Predictor       --> Reports
```

Step 1: Configure Proxy Access
Port monitoring requires accessing platforms from various geographic locations. DataResearchTools mobile proxies provide the necessary coverage:
```python
class PortMonitorProxyConfig:
    """Configure proxies for port monitoring across SEA."""

    # Map ports to countries for proxy selection
    PORT_COUNTRIES = {
        "SGSIN": "sg",  # Singapore
        "IDJKT": "id",  # Tanjung Priok, Jakarta
        "IDSBY": "id",  # Tanjung Perak, Surabaya
        "THLCH": "th",  # Laem Chabang
        "THBKK": "th",  # Bangkok Port
        "VNSGN": "vn",  # Cat Lai, Ho Chi Minh City
        "VNHPH": "vn",  # Hai Phong
        "PHMNL": "ph",  # Manila
        "PHCEB": "ph",  # Cebu
        "MYPKG": "my",  # Port Klang
        "MYPEN": "my",  # Penang Port
    }

    def __init__(self, base_config):
        self.base_config = base_config

    def get_proxy_for_port(self, port_code):
        """Get appropriate proxy for monitoring a specific port."""
        country = self.PORT_COUNTRIES.get(port_code, "sg")
        return self.base_config.get_proxy(country)
```

Step 2: Collect AIS-Based Congestion Data
Monitor vessel queues at ports using AIS data platforms:
```python
import requests
from datetime import datetime


class AISCongestionCollector:
    """Collect port congestion data from AIS platforms."""

    def __init__(self, proxy_config):
        self.proxy_config = proxy_config

    def count_vessels_at_port(self, port_code):
        """Count vessels at berth, at anchorage, and approaching."""
        proxy = self.proxy_config.get_proxy_for_port(port_code)
        session = requests.Session()
        session.proxies = proxy
        session.headers.update({
            "User-Agent": (
                "Mozilla/5.0 (Linux; Android 14; Pixel 8) "
                "AppleWebKit/537.36 Chrome/121.0.0.0 Mobile Safari/537.36"
            ),
        })
        try:
            # Query AIS platform API for vessels near the port
            response = session.get(
                "https://ais-platform.com/api/port-vessels",
                params={"port": port_code, "radius_nm": 20},
                timeout=30,
            )
            if response.status_code == 200:
                data = response.json()
                return self._categorize_vessels(data, port_code)
        except Exception as e:
            print(f"Error collecting AIS data for {port_code}: {e}")
        return None

    def _categorize_vessels(self, vessel_data, port_code):
        """Categorize vessels by their position relative to port."""
        result = {
            "port": port_code,
            "at_berth": 0,
            "at_anchorage": 0,
            "approaching": 0,
            "departing": 0,
            "vessel_details": [],
            "collected_at": datetime.utcnow().isoformat(),
        }
        for vessel in vessel_data.get("vessels", []):
            status = vessel.get("navigation_status", "")
            speed = vessel.get("speed_knots", 0)
            if status in ("moored", "at_berth"):
                result["at_berth"] += 1
            elif status == "at_anchor" or speed < 0.5:
                result["at_anchorage"] += 1
            else:
                # Moving vessel: determine if approaching or departing
                # based on declared destination relative to this port
                if self._is_approaching(vessel, port_code):
                    result["approaching"] += 1
                else:
                    result["departing"] += 1
            result["vessel_details"].append({
                "vessel_name": vessel.get("name"),
                "imo": vessel.get("imo"),
                "vessel_type": vessel.get("type"),
                "status": status,
                "speed": speed,
                "eta": vessel.get("eta"),
            })
        return result

    def _is_approaching(self, vessel, port_code):
        """Determine if a vessel is approaching or departing the port."""
        # Simplified: check if vessel's declared destination is this port
        return vessel.get("destination", "").upper() == port_code
```

Step 3: Monitor Port Authority Updates
Collect operational updates from port authority websites:
```python
import requests


class PortAuthorityMonitor:
    """Monitor port authority websites for operational updates."""

    def __init__(self, proxy_config):
        self.proxy_config = proxy_config

    def check_berth_schedule(self, port_code):
        """Check current berth occupancy and upcoming vessels."""
        proxy = self.proxy_config.get_proxy_for_port(port_code)
        session = requests.Session()
        session.proxies = proxy
        # Port authority websites vary significantly;
        # this is a generalized approach
        try:
            response = session.get(
                self._get_port_url(port_code, "berth_schedule"),
                timeout=30,
            )
            if response.status_code == 200:
                return self._parse_berth_data(response.text, port_code)
        except Exception as e:
            print(f"Error checking berth schedule for {port_code}: {e}")
        return None

    def check_operational_notices(self, port_code):
        """Check for operational notices that may indicate congestion."""
        proxy = self.proxy_config.get_proxy_for_port(port_code)
        session = requests.Session()
        session.proxies = proxy
        try:
            response = session.get(
                self._get_port_url(port_code, "notices"),
                timeout=30,
            )
            if response.status_code == 200:
                return self._parse_notices(response.text, port_code)
        except Exception as e:
            print(f"Error checking notices for {port_code}: {e}")
        return None

    def _get_port_url(self, port_code, data_type):
        """Get the URL for specific data from a port authority."""
        # Map port codes to their authority URLs
        port_urls = {
            "SGSIN": {
                "berth_schedule": "https://www.psa.com.sg/schedules",
                "notices": "https://www.psa.com.sg/notices",
            },
            "THLCH": {
                "berth_schedule": "https://laemchabangport.com/berth",
                "notices": "https://laemchabangport.com/notices",
            },
            # Additional ports...
        }
        return port_urls.get(port_code, {}).get(data_type, "")
```

Step 4: Calculate Congestion Index
Create a composite congestion index from multiple data sources:
```python
from datetime import datetime


class CongestionIndexCalculator:
    """Calculate a composite port congestion index."""

    def calculate_index(self, port_data):
        """
        Calculate congestion index on a 0-100 scale.
        0 = no congestion, 100 = severe congestion.
        """
        scores = []
        # Vessel queue score (0-100)
        anchorage_count = port_data.get("at_anchorage", 0)
        queue_score = min(anchorage_count * 5, 100)
        scores.append(("vessel_queue", queue_score, 0.30))
        # Berth utilization score (0-100)
        berth_util = port_data.get("berth_utilization_pct", 50)
        berth_score = min(berth_util * 1.2, 100)
        scores.append(("berth_util", berth_score, 0.25))
        # Average waiting time score (0-100)
        avg_wait_hours = port_data.get("avg_waiting_hours", 0)
        wait_score = min(avg_wait_hours * 2, 100)
        scores.append(("wait_time", wait_score, 0.25))
        # Yard density score (0-100)
        yard_density = port_data.get("yard_density_pct", 50)
        yard_score = min(yard_density * 1.3, 100)
        scores.append(("yard_density", yard_score, 0.20))
        # Weighted average
        congestion_index = sum(
            score * weight for _, score, weight in scores
        )
        return {
            "port": port_data.get("port"),
            "congestion_index": round(congestion_index, 1),
            "severity": self._get_severity(congestion_index),
            "component_scores": {
                name: round(score, 1) for name, score, _ in scores
            },
            "calculated_at": datetime.utcnow().isoformat(),
        }

    def _get_severity(self, index):
        """Map congestion index to severity level."""
        if index < 20:
            return "LOW"
        elif index < 40:
            return "MODERATE"
        elif index < 60:
            return "HIGH"
        elif index < 80:
            return "SEVERE"
        else:
            return "CRITICAL"
```

Step 5: Set Up Alerts
Configure alerts for congestion changes:
```python
class CongestionAlertSystem:
    """Alert when port congestion changes significantly."""

    def __init__(self, notification_service):
        self.notifier = notification_service
        self.thresholds = {
            "increase_threshold": 15,   # Alert if index rises by 15+ points
            "decrease_threshold": -15,  # Alert if index drops by 15+ points
            "absolute_high": 60,        # Alert if index exceeds 60
            "absolute_critical": 80,    # Urgent alert if index exceeds 80
        }

    def evaluate_and_alert(self, current_index, previous_index):
        """Compare current congestion to previous and generate alerts."""
        alerts = []
        change = (
            current_index["congestion_index"]
            - previous_index["congestion_index"]
        )
        if change >= self.thresholds["increase_threshold"]:
            alerts.append({
                "type": "CONGESTION_INCREASING",
                "port": current_index["port"],
                "current_index": current_index["congestion_index"],
                "previous_index": previous_index["congestion_index"],
                "change": round(change, 1),
                "severity": current_index["severity"],
                "priority": "HIGH",
            })
        if current_index["congestion_index"] >= self.thresholds["absolute_critical"]:
            alerts.append({
                "type": "CRITICAL_CONGESTION",
                "port": current_index["port"],
                "current_index": current_index["congestion_index"],
                "severity": "CRITICAL",
                "priority": "URGENT",
            })
        for alert in alerts:
            self.notifier.send(alert)
        return alerts
```

Practical Applications
Shipment Rerouting Decisions
When congestion is detected at a destination port, evaluate alternatives:
```python
def evaluate_alternative_ports(congestion_data, shipment):
    """Evaluate alternative ports when primary port is congested."""
    primary_port = shipment["destination_port"]
    primary_congestion = congestion_data.get(primary_port, {})
    if primary_congestion.get("congestion_index", 0) < 40:
        return {"recommendation": "PROCEED", "port": primary_port}
    # Check alternative ports
    alternatives = {
        "SGSIN": ["MYPKG", "MYPEN"],  # Singapore alternatives
        "IDJKT": ["IDSBY", "IDSRG"],  # Jakarta alternatives
        "THLCH": ["THBKK"],           # Laem Chabang alternatives
        "VNSGN": ["VNHPH"],           # Cat Lai alternatives
    }
    options = []
    for alt_port in alternatives.get(primary_port, []):
        alt_congestion = congestion_data.get(alt_port, {})
        alt_index = alt_congestion.get("congestion_index", 50)
        options.append({
            "port": alt_port,
            "congestion_index": alt_index,
            "estimated_delay_days": alt_index / 20,
            "additional_inland_cost": (
                _estimate_inland_cost(alt_port, shipment["final_destination"])
            ),
        })
    options.sort(key=lambda x: x["congestion_index"])
    if options and (
        options[0]["congestion_index"]
        < primary_congestion.get("congestion_index", 100) - 20
    ):
        return {
            "recommendation": "REROUTE",
            "current_port": primary_port,
            "suggested_port": options[0]["port"],
            "congestion_improvement": (
                primary_congestion.get("congestion_index", 0)
                - options[0]["congestion_index"]
            ),
        }
    return {"recommendation": "PROCEED_WITH_CAUTION", "port": primary_port}
```

Customer Communication
Use congestion data to proactively inform customers about potential delays:
```python
from datetime import timedelta


def generate_delay_notification(congestion_data, affected_shipments):
    """Generate customer notifications for congestion-related delays."""
    notifications = []
    for shipment in affected_shipments:
        port = shipment["destination_port"]
        congestion = congestion_data.get(port, {})
        if congestion.get("congestion_index", 0) > 50:
            estimated_delay = max(0, congestion["congestion_index"] / 20 - 1)
            original_eta = shipment["original_eta"]
            revised_eta = original_eta + timedelta(days=estimated_delay)
            notifications.append({
                "shipment_id": shipment["id"],
                "customer": shipment["customer"],
                "message": (
                    f"Your shipment {shipment['id']} may experience "
                    f"a delay of {int(estimated_delay)} day(s) due to "
                    f"port congestion at {port}. "
                    f"Revised ETA: {revised_eta.strftime('%Y-%m-%d')}"
                ),
                "original_eta": original_eta.isoformat(),
                "revised_eta": revised_eta.isoformat(),
            })
    return notifications
```

DataResearchTools for Port Monitoring
DataResearchTools mobile proxies provide specific advantages for port congestion monitoring:
- Local AIS platform access: Many AIS data providers serve more detailed data to local users; mobile proxies from DataResearchTools help ensure access to the full local content.
- Port authority compatibility: Government port authority websites are generally accessible through genuine mobile connections.
- Multi-port coverage: Monitor ports across all major SEA countries from a single proxy provider.
- Continuous monitoring: Reliable mobile IPs enable round-the-clock monitoring without access interruptions.
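The examples in Step 1 call `base_config.get_proxy(country)` without showing that wrapper. A minimal sketch of what it might look like, assuming country targeting is encoded in the proxy username (the gateway address and credential format below are placeholders, not actual DataResearchTools endpoints -- substitute the values from your provider's dashboard):

```python
class MobileProxyConfig:
    """Minimal provider wrapper exposing get_proxy(country).

    The gateway host and username format are illustrative placeholders.
    """

    def __init__(self, username, password, gateway="gw.example-proxy.net:7000"):
        self.username = username
        self.password = password
        self.gateway = gateway

    def get_proxy(self, country):
        # Country targeting is commonly encoded in the proxy username
        url = (
            f"http://{self.username}-country-{country}:"
            f"{self.password}@{self.gateway}"
        )
        # requests expects a dict mapping scheme to proxy URL
        return {"http": url, "https": url}
```

Pass an instance of this wrapper to `PortMonitorProxyConfig`, which then selects the right country per port code.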
Conclusion
Port congestion monitoring is a critical capability for logistics companies operating in Southeast Asia. By collecting and analyzing data from AIS platforms, port authorities, and shipping lines, logistics teams can detect congestion early, make informed rerouting decisions, and communicate proactively with customers.
DataResearchTools mobile proxies provide the reliable, geographically targeted access needed to collect port data across the region. Start monitoring the ports most relevant to your supply chain, build out your congestion index calculations, and integrate alerts into your operational workflows. The ability to detect and respond to port congestion hours or days ahead of competitors is a significant operational advantage.
Related Reading
- Best Proxies for Logistics and Supply Chain Data Collection
- Building a Delivery SLA Monitoring System with Proxies
- aiohttp + BeautifulSoup: Async Python Scraping
- How to Scrape AliExpress Product Data Without Getting Blocked
- Amazon Buy Box Monitoring: Proxy Setup for Continuous Tracking
- How Anti-Bot Systems Detect Scrapers (Cloudflare, Akamai, PerimeterX)
last updated: April 3, 2026