Proxies for Indeed and LinkedIn Recruiter Multi-Account Automation

Recruiting agencies and in-house talent teams face a constant challenge: sourcing enough qualified candidates to fill open positions. LinkedIn Recruiter and Indeed are the two dominant platforms, together covering the vast majority of job seekers and passive candidates. Both platforms impose strict limits on searches, InMails, and profile views that constrain recruiting throughput.

Multi-account automation with mobile proxies allows recruiting teams to scale their sourcing operations while maintaining account safety. This guide covers the technical setup, safe operating practices, and candidate data extraction strategies for both platforms.

Why Recruiters Need Proxies

The economics of recruiting demand scale. A single recruiter seat on LinkedIn Recruiter costs $8,000-$12,000 per year and limits profile views and InMails. Indeed Resume access costs $100-$500 per month per seat with daily search limits.

For agencies filling dozens of positions simultaneously across multiple industries, these per-seat limits create bottlenecks. Running multiple accounts through properly configured proxy infrastructure multiplies sourcing capacity proportionally.

Key Requirements

  • Sticky sessions — Both platforms require consistent IP addresses throughout a session.
  • Geographic matching — Proxy location must match the account’s registration region.
  • One account per IP — Never share proxy IPs between accounts on the same platform.
  • Mobile IPs — Both LinkedIn and Indeed flag datacenter and many residential proxy IPs.
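
Most mobile proxy gateways encode stickiness and geo-targeting as tokens in the proxy username. A minimal sketch of building such a configuration, assuming a hypothetical gateway at `gateway.example.com:8000` that parses `session-` and `country-` tokens (the exact token format varies by provider, so check your gateway's documentation):

```python
def build_proxy_auth(session_id: str, country: str) -> dict:
    """Build a sticky, geo-matched proxy config for one account.

    The gateway address and username token format are illustrative
    assumptions -- substitute your provider's actual scheme.
    """
    return {
        "server": "http://gateway.example.com:8000",
        "username": f"user-session-{session_id}-country-{country.lower()}",
        "password": "pass",
    }

# One fixed session ID per account keeps the same mobile IP across requests
config = build_proxy_auth("acct-017", "US")
```

Reusing the same `session_id` for every request from a given account is what satisfies the sticky-session and one-account-per-IP requirements above.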

LinkedIn Recruiter Automation

Account Architecture

Set up each LinkedIn Recruiter account with isolation:

from dataclasses import dataclass
from typing import Optional

@dataclass
class RecruiterAccount:
    email: str
    password: str
    proxy_session_id: str
    proxy_geo: str
    browser_profile_dir: str
    daily_search_limit: int = 30
    daily_inmail_limit: int = 25
    daily_profile_view_limit: int = 100
    searches_today: int = 0
    inmails_today: int = 0
    profile_views_today: int = 0

class AccountPool:
    """Manage pool of LinkedIn Recruiter accounts"""

    def __init__(self, accounts: list, proxy_gateway: str):
        self.accounts = accounts
        self.proxy_gateway = proxy_gateway

    # Map activity type to (limit field, counter field). The counter
    # fields are pluralized while the limit fields are singular, so
    # spell out both names explicitly rather than string-formatting them.
    ACTIVITY_FIELDS = {
        "search": ("daily_search_limit", "searches_today"),
        "inmail": ("daily_inmail_limit", "inmails_today"),
        "profile_view": ("daily_profile_view_limit", "profile_views_today"),
    }

    def get_available_account(self, activity_type: str):
        """Get an account that hasn't hit its daily limit"""
        limit_field, count_field = self.ACTIVITY_FIELDS[activity_type]

        for account in self.accounts:
            current = getattr(account, count_field)
            limit = getattr(account, limit_field)
            if current < limit:
                return account

        return None  # All accounts at capacity

    def get_proxy_for_account(self, account: RecruiterAccount):
        """Get sticky proxy configuration for account"""
        return {
            "server": f"http://{self.proxy_gateway}",
            "username": f"user-session-{account.proxy_session_id}",
            "password": "pass",
        }

    def reset_daily_counts(self):
        """Reset all daily counters (run at midnight)"""
        for account in self.accounts:
            account.searches_today = 0
            account.inmails_today = 0
            account.profile_views_today = 0

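The docstring on reset_daily_counts says to run it at midnight. A minimal asyncio scheduler sketch for that reset, where `pool` is the AccountPool instance from above:

```python
import asyncio
from datetime import datetime, timedelta

def seconds_until_midnight(now=None) -> float:
    """Seconds from `now` (default: current local time) to the next midnight."""
    now = now or datetime.now()
    tomorrow = (now + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return (tomorrow - now).total_seconds()

async def run_daily_reset(pool):
    """Sleep until midnight, reset all counters, repeat forever."""
    while True:
        await asyncio.sleep(seconds_until_midnight())
        pool.reset_daily_counts()
```

Run this as a background task (`asyncio.create_task(run_daily_reset(pool))`) alongside your scraping loops so counters roll over without manual intervention.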
Candidate Search Automation

from playwright.async_api import async_playwright
import asyncio
import random

async def search_recruiter_candidates(account, proxy_config, search_criteria):
    """Search LinkedIn Recruiter for candidates"""
    async with async_playwright() as p:
        browser = await p.chromium.launch_persistent_context(
            user_data_dir=account.browser_profile_dir,
            proxy=proxy_config,
            viewport={"width": 1920, "height": 1080},
            locale="en-US",
        )

        page = browser.pages[0] if browser.pages else await browser.new_page()

        # Navigate to Recruiter search
        await page.goto("https://www.linkedin.com/talent/search")
        await page.wait_for_timeout(random.randint(3000, 6000))

        # Apply search filters
        if search_criteria.get("keywords"):
            keyword_input = await page.wait_for_selector('[placeholder*="Search"]')
            await keyword_input.fill(search_criteria["keywords"])
            await page.wait_for_timeout(random.randint(1000, 2000))
            await page.keyboard.press("Enter")

        await page.wait_for_timeout(random.randint(4000, 8000))

        # Extract candidate cards
        candidates = []
        cards = await page.query_selector_all('[class*="search-result"]')

        for card in cards:
            candidate = await extract_candidate_card(card)
            if candidate:
                candidates.append(candidate)
                account.profile_views_today += 1

                if account.profile_views_today >= account.daily_profile_view_limit:
                    break

            await page.wait_for_timeout(random.randint(2000, 5000))

        account.searches_today += 1
        await browser.close()

        return candidates


async def extract_candidate_card(card):
    """Extract candidate info from a search result card"""
    candidate = {}

    name_el = await card.query_selector('[class*="name"]')
    if name_el:
        candidate['name'] = (await name_el.inner_text()).strip()

    title_el = await card.query_selector('[class*="headline"]')
    if title_el:
        candidate['current_title'] = (await title_el.inner_text()).strip()

    company_el = await card.query_selector('[class*="company"]')
    if company_el:
        candidate['current_company'] = (await company_el.inner_text()).strip()

    location_el = await card.query_selector('[class*="location"]')
    if location_el:
        candidate['location'] = (await location_el.inner_text()).strip()

    link_el = await card.query_selector('a[href*="/talent/profile/"]')
    if link_el:
        candidate['recruiter_url'] = await link_el.get_attribute('href')

    return candidate if candidate.get('name') else None

Indeed Resume Scraping

Indeed offers a separate product, Indeed Resume, that provides searchable access to candidate resumes, alongside its standard job posting and applicant tracking tools.

Indeed Resume Search

from urllib.parse import quote_plus

async def search_indeed_resumes(query, location, proxy_config, max_pages=5):
    """Search Indeed Resume database"""
    async with async_playwright() as p:
        browser = await p.chromium.launch(
            proxy=proxy_config,
            headless=False,
        )
        context = await browser.new_context(
            viewport={"width": 1920, "height": 1080},
            user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        )
        page = await context.new_page()

        candidates = []

        for page_num in range(max_pages):
            start = page_num * 50
            # URL-encode the query and location so spaces and symbols survive
            url = (
                "https://resumes.indeed.com/search"
                f"?q={quote_plus(query)}&l={quote_plus(location)}&start={start}"
            )

            await page.goto(url, wait_until="networkidle")
            await page.wait_for_timeout(random.randint(3000, 7000))

            # Check for blocks
            if await page.query_selector('text="Please verify you are a human"'):
                print("CAPTCHA detected on Indeed")
                break

            # Extract resume cards
            cards = await page.query_selector_all('[class*="resume-card"]')

            for card in cards:
                candidate = {}
                name_el = await card.query_selector('[class*="name"]')
                if name_el:
                    candidate['name'] = (await name_el.inner_text()).strip()

                title_el = await card.query_selector('[class*="title"]')
                if title_el:
                    candidate['title'] = (await title_el.inner_text()).strip()

                location_el = await card.query_selector('[class*="location"]')
                if location_el:
                    candidate['location'] = (await location_el.inner_text()).strip()

                experience_el = await card.query_selector('[class*="experience"]')
                if experience_el:
                    candidate['experience'] = (await experience_el.inner_text()).strip()

                if candidate.get('name'):
                    candidates.append(candidate)

            await page.wait_for_timeout(random.randint(5000, 12000))

        await browser.close()
        return candidates

Indeed Job Applicant Monitoring

For employers, monitoring who applies to competitor job postings provides sourcing intelligence:

async def monitor_competitor_applicants(job_urls, proxy_config):
    """Monitor competitor job postings for potential candidates"""
    # This approach monitors job posting activity, not applicant data directly
    # Track when postings are updated, removed, or reposted

    job_status = {}

    async with async_playwright() as p:
        browser = await p.chromium.launch(proxy=proxy_config)
        page = await browser.new_page()

        for url in job_urls:
            await page.goto(url, wait_until="networkidle")
            await page.wait_for_timeout(random.randint(2000, 5000))

            status = {}
            # Check if posting is still active
            expired = await page.query_selector('text="This job has expired"')
            status['active'] = expired is None

            # Check posting date
            date_el = await page.query_selector('[class*="date"]')
            if date_el:
                status['posted_date'] = (await date_el.inner_text()).strip()

            # Check application count (sometimes visible)
            apps_el = await page.query_selector('text=/\\d+ applicant/')
            if apps_el:
                status['applicant_count'] = (await apps_el.inner_text()).strip()

            job_status[url] = status

        await browser.close()

    return job_status

Safe Operating Limits

Maintaining account safety is critical. These limits apply per account per day. For more on proxy rotation strategies that keep accounts safe, see our proxy glossary.

LinkedIn Recruiter

Activity              Conservative   Moderate   Aggressive
Profile views         50             100        150
Searches              15             30         50
InMails               15             25         40
Connection requests   10             20         30

Indeed Resume

Activity              Conservative   Moderate   Aggressive
Resume views          100            200        300
Search pages          20             40         60
Contact reveals       30             50         80
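
These tiers can be encoded so your automation selects caps by risk profile rather than hardcoding numbers. A small sketch, with values copied from the LinkedIn Recruiter table above:

```python
# Daily per-account caps by risk profile (from the LinkedIn Recruiter table)
LINKEDIN_LIMITS = {
    "conservative": {"profile_views": 50,  "searches": 15, "inmails": 15, "connection_requests": 10},
    "moderate":     {"profile_views": 100, "searches": 30, "inmails": 25, "connection_requests": 20},
    "aggressive":   {"profile_views": 150, "searches": 50, "inmails": 40, "connection_requests": 30},
}

def limit_for(profile: str, activity: str) -> int:
    """Look up the daily cap for a risk profile / activity pair."""
    return LINKEDIN_LIMITS[profile][activity]
```

Feeding these values into the RecruiterAccount limit fields keeps all accounts in a pool on a consistent, auditable policy.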

Warm-Up Schedule

New accounts need gradual ramp-up:

WARMUP_SCHEDULE = {
    # Week: (profile_views, searches, inmails)
    1: (10, 5, 0),    # Minimal activity, manual only
    2: (25, 10, 5),   # Light automation
    3: (50, 20, 15),  # Moderate automation
    4: (80, 25, 20),  # Near full capacity
    5: (100, 30, 25), # Full capacity
}

def get_daily_limits(account_age_weeks):
    """Get appropriate daily limits based on account age"""
    # Clamp to the 1..5 range: without the lower bound, a week-0 account
    # would miss the schedule lookup and fall through to full capacity
    week = max(1, min(account_age_weeks, max(WARMUP_SCHEDULE.keys())))
    limits = WARMUP_SCHEDULE[week]
    return {
        "profile_views": limits[0],
        "searches": limits[1],
        "inmails": limits[2],
    }

Candidate Data Pipeline

Structure extracted candidate data for your ATS (Applicant Tracking System):

import csv
from datetime import datetime

class CandidatePipeline:
    """Process and store candidate data from multiple sources"""

    def __init__(self, db_connection):
        self.db = db_connection

    def process_candidate(self, raw_data, source):
        """Clean and store a candidate record"""
        candidate = {
            "name": raw_data.get("name", "").strip(),
            "current_title": raw_data.get("current_title", "").strip(),
            "current_company": raw_data.get("current_company", "").strip(),
            "location": raw_data.get("location", "").strip(),
            "source": source,
            "source_url": raw_data.get("recruiter_url") or raw_data.get("indeed_url"),
            "scraped_at": datetime.utcnow().isoformat(),
        }

        # Deduplicate by name + company
        existing = self.find_existing(candidate["name"], candidate["current_company"])
        if existing:
            # Update with new source info
            self.update_candidate(existing["id"], candidate)
        else:
            self.insert_candidate(candidate)

        return candidate

    def export_for_ats(self, candidates, output_file):
        """Export candidates in ATS-importable format"""
        fieldnames = [
            'name', 'current_title', 'current_company',
            'location', 'source', 'source_url', 'scraped_at'
        ]

        with open(output_file, 'w', newline='', encoding='utf-8') as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction='ignore')
            writer.writeheader()
            for candidate in candidates:
                writer.writerow(candidate)
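
The find_existing, insert_candidate, and update_candidate helpers referenced above depend on your database. A minimal self-contained sketch using SQLite, where the table name, schema, and name-plus-company uniqueness key are illustrative assumptions:

```python
import sqlite3

def make_candidate_store(path=":memory:"):
    """Create a small SQLite store keyed on (name, current_company)."""
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS candidates (
            id INTEGER PRIMARY KEY,
            name TEXT, current_company TEXT, current_title TEXT,
            location TEXT, source TEXT, source_url TEXT, scraped_at TEXT,
            UNIQUE(name, current_company)
        )
    """)
    return db

def find_existing(db, name, company):
    """Return {'id': ...} for a matching record, or None."""
    row = db.execute(
        "SELECT id FROM candidates WHERE name = ? AND current_company = ?",
        (name, company),
    ).fetchone()
    return {"id": row[0]} if row else None

def insert_candidate(db, c):
    """Insert a record; the UNIQUE constraint silently drops duplicates."""
    db.execute(
        "INSERT OR IGNORE INTO candidates "
        "(name, current_company, current_title, location, source, source_url, scraped_at) "
        "VALUES (?, ?, ?, ?, ?, ?, ?)",
        (c.get("name"), c.get("current_company"), c.get("current_title"),
         c.get("location"), c.get("source"), c.get("source_url"), c.get("scraped_at")),
    )
    db.commit()
```

In production you would point this at a shared database file (or swap in your ATS vendor's API) rather than an in-memory store.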

Enriching Candidate Profiles

After finding candidates on LinkedIn or Indeed, enrich their profiles with data from company websites and social profiles using your web scraping proxy infrastructure:

async def enrich_candidate(candidate, proxy_url):
    """Enrich candidate data with additional sources"""
    enriched = candidate.copy()

    # Look up company website for context
    if candidate.get("current_company"):
        company_data = await scrape_company_info(
            candidate["current_company"],
            proxy_url
        )
        enriched["company_size"] = company_data.get("employee_count")
        enriched["company_industry"] = company_data.get("industry")
        enriched["company_website"] = company_data.get("website")

    # Search for GitHub profile (for engineering roles)
    if "engineer" in candidate.get("current_title", "").lower():
        github = await search_github(candidate["name"], proxy_url)
        if github:
            enriched["github_url"] = github.get("url")
            enriched["github_repos"] = github.get("public_repos")

    return enriched

Compliance Considerations

Recruiting automation operates under specific legal frameworks:

  • EEOC compliance — Ensure automated sourcing does not systematically exclude protected groups.
  • GDPR/CCPA — Candidate data collection must comply with privacy regulations. Provide data deletion mechanisms.
  • Platform ToS — Both LinkedIn and Indeed prohibit automated scraping. Understand the legal risks.
  • Data retention — Establish clear policies for how long candidate data is stored.
  • Candidate consent — When contacting candidates, be transparent about how their information was obtained.
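
A retention policy is easiest to enforce when expiry is computable from the scraped_at timestamp the pipeline already records. A minimal sketch, with the 180-day window as an illustrative placeholder to be set by your own legal review:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 180  # illustrative policy window, not a legal recommendation

def is_expired(scraped_at_iso: str, now=None) -> bool:
    """True when a candidate record is past the retention window."""
    now = now or datetime.now(timezone.utc)
    scraped = datetime.fromisoformat(scraped_at_iso)
    if scraped.tzinfo is None:
        # Treat naive timestamps as UTC, matching the pipeline's utcnow() output
        scraped = scraped.replace(tzinfo=timezone.utc)
    return now - scraped > timedelta(days=RETENTION_DAYS)
```

A nightly job can then filter stored records through is_expired and delete the matches, which also gives you a concrete mechanism for GDPR/CCPA deletion requests.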

Conclusion

Multi-account automation with mobile proxies transforms recruiting from a manually intensive process into a scalable data operation. The key is balancing throughput with account safety — conservative daily limits, proper warm-up periods, and dedicated proxy IPs per account. Combined with structured data pipelines and ATS integration, this approach lets recruiting teams source candidates at volumes that would require a much larger team operating manually.

