How to Track PhilGEPS Tenders and Contract Awards

The Philippine Government Electronic Procurement System (PhilGEPS) is the centralized platform for all government procurement activities in the Philippines. With the Philippine government spending trillions of pesos annually on goods, services, and infrastructure, PhilGEPS is an essential data source for businesses operating in the Philippine market.

This guide covers how to build an automated PhilGEPS tracking system using proxy infrastructure to monitor tenders, contract awards, and procurement patterns.

Understanding PhilGEPS

Platform Overview

PhilGEPS serves as the single official source of government procurement information in the Philippines. Under Republic Act 9184 (Government Procurement Reform Act), all government agencies must post their procurement opportunities on PhilGEPS.

The platform publishes:

  • Opportunities/Invitations to Bid: Open tenders for goods, services, and infrastructure
  • Award Notices: Information about winning bidders and contract values
  • Procurement Plans: Annual procurement plans of government agencies
  • Supplier Registry: Database of registered government suppliers (PhilGEPS Red and Platinum members)

Data Accessibility

PhilGEPS offers a public search interface that allows anyone to search for procurement opportunities. However, the platform has several characteristics that make automated monitoring challenging:

  • The search interface requires form submissions with specific parameters
  • Results are paginated with limited items per page
  • Session management requires cookie handling
  • The platform experiences heavy traffic during business hours
  • Server response times can be slow, especially for complex queries

Setting Up Proxy Infrastructure for PhilGEPS

Why Philippine Proxies Matter

PhilGEPS is accessible globally, but using Philippine IP addresses provides several advantages:

  • Faster response times from PhilGEPS servers
  • More natural traffic patterns matching Filipino users
  • Reduced risk of IP-based restrictions
  • Access to content that may be optimized for local visitors

DataResearchTools provides Philippine mobile proxies with carrier IPs from Globe, Smart, DITO, and other local carriers. These IPs are recognized as legitimate Philippine traffic by PhilGEPS servers.

Proxy Configuration

class PhilGEPSProxyConfig:
    def __init__(self):
        self.proxy_host = "sea.dataresearchtools.com"
        self.proxy_port = 8080
        self.username = "your_username"
        self.password = "your_password"

    def get_rotating_proxy(self):
        """Get a rotating Philippine proxy."""
        return {
            "http": f"http://{self.username}:{self.password}@{self.proxy_host}:{self.proxy_port}?country=PH",
            "https": f"http://{self.username}:{self.password}@{self.proxy_host}:{self.proxy_port}?country=PH"
        }

    def get_sticky_proxy(self, session_name, duration=600):
        """Get a sticky session Philippine proxy."""
        return {
            "http": f"http://{self.username}:{self.password}@{self.proxy_host}:{self.proxy_port}?country=PH&session={session_name}&duration={duration}",
            "https": f"http://{self.username}:{self.password}@{self.proxy_host}:{self.proxy_port}?country=PH&session={session_name}&duration={duration}"
        }
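
Before pointing a scraper at PhilGEPS, it is worth verifying the proxy route with a quick test request. The following is a minimal sketch; httpbin.org is just a public IP-echo service used for illustration:

import requests

config = PhilGEPSProxyConfig()

# Quick connectivity check: print the exit IP the remote service sees
# when the request is routed through the rotating Philippine proxy.
response = requests.get(
    "https://httpbin.org/ip",
    proxies=config.get_rotating_proxy(),
    timeout=30,
)
print(response.json())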

Building the PhilGEPS Scraper

Searching for Opportunities

PhilGEPS supports searching by several criteria, including keyword, classification, area of delivery, and date range:

import requests
from bs4 import BeautifulSoup
from datetime import datetime, timedelta
import time
import random

class PhilGEPSScraper:
    BASE_URL = "https://www.philgeps.gov.ph"

    def __init__(self, proxy_config):
        self.proxy_config = proxy_config
        self.session = requests.Session()
        self.session.headers.update({
            'User-Agent': 'Mozilla/5.0 (Linux; Android 13; SM-A546B) AppleWebKit/537.36',
            'Accept': 'text/html,application/xhtml+xml',
            'Accept-Language': 'en-PH,en;q=0.9,fil;q=0.8',
        })

    def search_opportunities(self, keyword=None, category=None,
                            date_from=None, date_to=None, page=1):
        """Search PhilGEPS for procurement opportunities."""
        search_url = f"{self.BASE_URL}/PhilGEPS/GEPSNONPILOT/Tender/SplashOpenOpportunitiesUI.aspx"

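        # A sticky session keeps the same exit IP across the initial GET and
        # the search POST, which ASP.NET ViewState handling expects. Paging
        # past the first results page typically requires an additional
        # __EVENTTARGET postback, which is not shown here.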
        proxy = self.proxy_config.get_sticky_proxy(f"search_{page}")

        # First, get the page to extract form tokens
        initial_response = self.session.get(
            search_url, proxies=proxy, timeout=30
        )
        form_data = self._extract_form_data(initial_response.text)

        # Set search parameters
        if keyword:
            form_data['ctl00$ContentPlaceHolder1$txtKeyword'] = keyword
        if category:
            form_data['ctl00$ContentPlaceHolder1$ddlClassification'] = category
        if date_from:
            form_data['ctl00$ContentPlaceHolder1$txtPublishDateFrom'] = date_from
        if date_to:
            form_data['ctl00$ContentPlaceHolder1$txtPublishDateTo'] = date_to

        form_data['ctl00$ContentPlaceHolder1$btnSearch'] = 'Search'

        # Submit search
        response = self.session.post(
            search_url,
            data=form_data,
            proxies=proxy,
            timeout=60
        )

        return self.parse_search_results(response.text)

    def _extract_form_data(self, html):
        """Extract ASP.NET form tokens."""
        soup = BeautifulSoup(html, 'html.parser')
        form_data = {}

        for hidden in soup.find_all('input', type='hidden'):
            name = hidden.get('name', '')
            value = hidden.get('value', '')
            if name:
                form_data[name] = value

        return form_data

    def parse_search_results(self, html):
        """Parse search results page for opportunity summaries."""
        soup = BeautifulSoup(html, 'html.parser')
        opportunities = []

        results_table = soup.find('table', {'id': lambda x: x and 'GridView' in str(x)})
        if not results_table:
            return opportunities

        rows = results_table.find_all('tr')[1:]  # Skip header row
        for row in rows:
            cells = row.find_all('td')
            if len(cells) >= 6:
                opportunity = {
                    'reference': cells[0].get_text(strip=True),
                    'title': cells[1].get_text(strip=True),
                    'procuring_entity': cells[2].get_text(strip=True),
                    'classification': cells[3].get_text(strip=True),
                    'area_of_delivery': cells[4].get_text(strip=True),
                    'approved_budget': cells[5].get_text(strip=True),
                    'publish_date': cells[6].get_text(strip=True) if len(cells) > 6 else '',
                    'closing_date': cells[7].get_text(strip=True) if len(cells) > 7 else '',
                    'detail_link': self._extract_link(cells[1])
                }
                opportunities.append(opportunity)

        return opportunities

    def _extract_link(self, cell):
        """Extract detail page link from a table cell."""
        link = cell.find('a')
        if link and link.get('href'):
            href = link['href']
            if href.startswith('/'):
                return self.BASE_URL + href
            return href
        return None
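
Putting the pieces together looks like this (a sketch; the keyword and date values are illustrative):

config = PhilGEPSProxyConfig()
scraper = PhilGEPSScraper(config)

# Search January 2025 postings for an illustrative keyword.
opportunities = scraper.search_opportunities(
    keyword="medical supplies",
    date_from="01/01/2025",
    date_to="01/31/2025",
)

for opp in opportunities:
    print(opp['reference'], opp['title'], opp['approved_budget'])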

Fetching Opportunity Details

Each opportunity detail page contains the full procurement record. The following methods continue the PhilGEPSScraper class:

def fetch_opportunity_detail(self, detail_url):
    """Fetch full details of a specific opportunity."""
    proxy = self.proxy_config.get_rotating_proxy()

    response = self.session.get(
        detail_url,
        proxies=proxy,
        timeout=30
    )

    return self.parse_detail_page(response.text)

def parse_detail_page(self, html):
    """Extract structured data from opportunity detail page."""
    soup = BeautifulSoup(html, 'html.parser')

    detail = {}

    # Extract key fields from the detail table
    info_table = soup.find('table', class_='table_list')
    if info_table:
        rows = info_table.find_all('tr')
        for row in rows:
            header = row.find('th')
            data = row.find('td')
            if header and data:
                key = header.get_text(strip=True).rstrip(':')
                value = data.get_text(strip=True)
                detail[key] = value

    # Extract attached documents
    doc_section = soup.find('div', {'id': lambda x: x and 'document' in str(x).lower()})
    if doc_section:
        detail['documents'] = []
        for link in doc_section.find_all('a'):
            detail['documents'].append({
                'name': link.get_text(strip=True),
                'url': link.get('href', '')
            })

    return detail

Monitoring Contract Awards

Contract award notices reveal who is winning government business. These methods also belong to the PhilGEPSScraper class:

def search_awards(self, date_from=None, date_to=None, keyword=None):
    """Search PhilGEPS for contract award notices."""
    awards_url = f"{self.BASE_URL}/PhilGEPS/GEPSNONPILOT/Tender/SplashAwardNoticeUI.aspx"

    proxy = self.proxy_config.get_sticky_proxy("awards_search")

    initial_response = self.session.get(
        awards_url, proxies=proxy, timeout=30
    )
    form_data = self._extract_form_data(initial_response.text)

    if date_from:
        form_data['ctl00$ContentPlaceHolder1$txtAwardDateFrom'] = date_from
    if date_to:
        form_data['ctl00$ContentPlaceHolder1$txtAwardDateTo'] = date_to
    if keyword:
        form_data['ctl00$ContentPlaceHolder1$txtKeyword'] = keyword

    form_data['ctl00$ContentPlaceHolder1$btnSearch'] = 'Search'

    response = self.session.post(
        awards_url,
        data=form_data,
        proxies=proxy,
        timeout=60
    )

    return self.parse_award_results(response.text)

def parse_award_results(self, html):
    """Parse award notice search results."""
    soup = BeautifulSoup(html, 'html.parser')
    awards = []

    results_table = soup.find('table', {'id': lambda x: x and 'GridView' in str(x)})
    if not results_table:
        return awards

    rows = results_table.find_all('tr')[1:]
    for row in rows:
        cells = row.find_all('td')
        if len(cells) >= 5:
            award = {
                'reference': cells[0].get_text(strip=True),
                'title': cells[1].get_text(strip=True),
                'procuring_entity': cells[2].get_text(strip=True),
                'winning_bidder': cells[3].get_text(strip=True),
                'contract_amount': cells[4].get_text(strip=True),
                'award_date': cells[5].get_text(strip=True) if len(cells) > 5 else '',
            }
            awards.append(award)

    return awards

Building an Automated Monitoring Pipeline

Daily Monitoring Workflow

class PhilGEPSMonitor:
    def __init__(self, proxy_config, database, alert_service):
        self.scraper = PhilGEPSScraper(proxy_config)
        self.db = database
        self.alerts = alert_service

    def daily_scan(self):
        """Run daily monitoring cycle."""
        today = datetime.now().strftime('%m/%d/%Y')
        yesterday = (datetime.now() - timedelta(days=1)).strftime('%m/%d/%Y')

        # Scan new opportunities
        new_opps = self.scraper.search_opportunities(
            date_from=yesterday,
            date_to=today
        )

        for opp in new_opps:
            if not self.db.exists(opp['reference']):
                # Fetch full details
                if opp['detail_link']:
                    time.sleep(random.uniform(2, 5))
                    detail = self.scraper.fetch_opportunity_detail(opp['detail_link'])
                    opp.update(detail)

                # Store in database
                self.db.insert_opportunity(opp)

                # Check against alert profiles
                self.check_alerts(opp)

        # Scan new awards
        new_awards = self.scraper.search_awards(
            date_from=yesterday,
            date_to=today
        )

        for award in new_awards:
            if not self.db.award_exists(award['reference']):
                self.db.insert_award(award)

    def check_alerts(self, opportunity):
        """Check opportunity against configured alert profiles."""
        profiles = self.db.get_alert_profiles()

        for profile in profiles:
            if self._matches_profile(opportunity, profile):
                self.alerts.send(
                    recipient=profile['email'],
                    subject=f"New PhilGEPS Opportunity: {opportunity['title'][:50]}",
                    body=self._format_alert(opportunity)
                )

    def _matches_profile(self, opportunity, profile):
        """Check if an opportunity matches an alert profile."""
        text = f"{opportunity.get('title', '')} {opportunity.get('classification', '')}".lower()
        return any(kw.lower() in text for kw in profile.get('keywords', []))

Analyzing PhilGEPS Data

Procurement Spending by Agency

Track which agencies are the biggest spenders and what they buy:

def analyze_agency_spending(db):
    """Analyze procurement spending by government agency."""
    query = """
        SELECT procuring_entity,
               COUNT(*) as tender_count,
               SUM(approved_budget_numeric) as total_budget,
               AVG(approved_budget_numeric) as avg_budget
        FROM opportunities
        WHERE publish_date >= DATE_SUB(NOW(), INTERVAL 12 MONTH)
        GROUP BY procuring_entity
        ORDER BY total_budget DESC
        LIMIT 50
    """
    return db.execute(query)
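
Note that this query assumes an approved_budget_numeric column. PhilGEPS budgets are scraped as formatted peso strings (for example 'PHP 1,500,000.00'; the exact format is an assumption here), so a normalization step is needed before loading, along these lines:

import re

def parse_budget(budget_text):
    """Convert a budget string like 'PHP 1,500,000.00' into a float.

    Returns None when no amount can be recovered, so callers can flag
    the record rather than silently storing zero.
    """
    if not budget_text:
        return None
    # Keep only digits and the decimal point; drops 'PHP', commas, spaces.
    cleaned = re.sub(r'[^\d.]', '', budget_text)
    try:
        return float(cleaned) if cleaned else None
    except ValueError:
        return None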

Category Trends

Identify growing procurement categories:

def analyze_category_trends(db):
    """Analyze procurement trends by category over time."""
    query = """
        SELECT classification,
               DATE_FORMAT(publish_date, '%Y-%m') as month,
               COUNT(*) as opportunity_count,
               SUM(approved_budget_numeric) as total_value
        FROM opportunities
        WHERE publish_date >= DATE_SUB(NOW(), INTERVAL 24 MONTH)
        GROUP BY classification, month
        ORDER BY classification, month
    """
    return db.execute(query)

Competitive Intelligence

Analyze contract awards to understand the competitive landscape:

  • Which companies win the most government contracts (see the example query below)
  • Average contract sizes by category
  • Agency preferences for specific vendors
  • Geographic distribution of contract winners
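
For example, the first question maps to a straightforward aggregation over the stored award notices (a sketch, assuming an awards table whose contract_amount_numeric column was produced by the same normalization step used for budgets):

def analyze_top_winners(db):
    """Rank winning bidders by total contract value over the past year."""
    query = """
        SELECT winning_bidder,
               COUNT(*) as contracts_won,
               SUM(contract_amount_numeric) as total_value
        FROM awards
        WHERE award_date >= DATE_SUB(NOW(), INTERVAL 12 MONTH)
        GROUP BY winning_bidder
        ORDER BY total_value DESC
        LIMIT 50
    """
    return db.execute(query)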

Best Practices for PhilGEPS Scraping

Respectful Scraping

  • Limit requests to one every 3-5 seconds (a throttle sketch follows this list)
  • Avoid scraping during peak hours (9 AM – 12 PM PHT)
  • Cache results aggressively to minimize repeat requests
  • Use conditional requests where possible
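
A minimal throttle enforcing the first two points might look like this (a sketch; zoneinfo requires Python 3.9+):

import random
import time
from datetime import datetime
from zoneinfo import ZoneInfo

def polite_delay():
    """Wait 3-5 seconds between requests, backing off during PHT peak hours."""
    now_pht = datetime.now(ZoneInfo("Asia/Manila"))
    if 9 <= now_pht.hour < 12:
        # Peak window from the list above: slow down sharply here, or
        # better, schedule runs outside it entirely.
        time.sleep(60)
    time.sleep(random.uniform(3, 5))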

Data Quality

  • Validate all extracted data against expected formats (a validation sketch follows this list)
  • Handle encoding issues (Filipino names and terms may contain special characters)
  • Implement duplicate detection based on reference numbers
  • Track data freshness and flag stale records
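
A lightweight validation pass covering format checks and reference-based deduplication (a sketch; the MM/DD/YYYY date format matches the strings used elsewhere in this guide but should be confirmed against live data):

from datetime import datetime

def validate_opportunity(opp, seen_references):
    """Return True when the record is well-formed and not a duplicate."""
    ref = opp.get('reference', '').strip()
    if not ref or ref in seen_references:
        return False  # Missing reference number, or already processed
    try:
        datetime.strptime(opp.get('closing_date', ''), '%m/%d/%Y')
    except ValueError:
        return False  # Closing date not in the expected MM/DD/YYYY format
    seen_references.add(ref)
    return True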

Error Resilience

PhilGEPS can be slow and occasionally unstable. Build resilient scrapers:

  • Implement exponential backoff for retries (see the sketch after this list)
  • Set reasonable timeouts (30-60 seconds)
  • Handle session expiration gracefully
  • Log all errors for debugging
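
A retry wrapper implementing the first two points (a sketch):

import time
import requests

def fetch_with_backoff(session, url, proxies, max_retries=4):
    """GET with exponential backoff for transient PhilGEPS failures."""
    for attempt in range(max_retries):
        try:
            response = session.get(url, proxies=proxies, timeout=60)
            response.raise_for_status()
            return response
        except requests.RequestException as err:
            if attempt == max_retries - 1:
                raise  # Out of retries; surface the error to the caller
            wait = 5 * (2 ** attempt)  # 5s, 10s, 20s, 40s
            print(f"Request failed ({err}); retrying in {wait}s")
            time.sleep(wait)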

DataResearchTools for PhilGEPS Monitoring

DataResearchTools offers Philippine mobile proxies optimized for government portal access:

  • Globe, Smart, and DITO carrier IPs for authentic Philippine traffic
  • Sticky sessions for maintaining ASP.NET form state
  • Automatic failover when individual IPs encounter issues
  • Bandwidth-efficient routing for cost-effective high-volume monitoring
  • 24/7 availability for round-the-clock procurement tracking

Our Philippine proxy infrastructure handles the unique challenges of PhilGEPS, including slow server response times, session-heavy navigation, and ASP.NET ViewState management.

Conclusion

Automated PhilGEPS tracking transforms how businesses approach Philippine government procurement. Instead of manually checking the portal daily, a proxy-powered monitoring system ensures you never miss a relevant opportunity.

DataResearchTools provides the reliable Philippine proxy infrastructure needed to maintain continuous access to PhilGEPS data. Start with a focused monitoring scope covering your target categories and agencies, then expand as you build confidence in your data pipeline. The businesses that systematically monitor and respond to PhilGEPS opportunities are the ones that consistently win government contracts in the Philippines.

