How to Use Proxies with Hunter.io and Email Verification Tools
Finding email addresses is only half the battle in B2B outreach. Sending emails to invalid addresses destroys your sender reputation, increases bounce rates, and can get your domain blacklisted. Email finding tools like Hunter.io discover contact addresses, while verification tools like NeverBounce and ZeroBounce confirm those addresses are deliverable.
Both types of tools impose rate limits and per-query pricing that constrain high-volume operations. Mobile proxies enable you to scale these tools effectively by managing multiple accounts, distributing API calls, and supplementing paid tools with direct email discovery from company websites.
Understanding the Email Finding and Verification Stack
A production email outreach pipeline has three layers:
- Discovery — Find potential email addresses for target contacts
- Verification — Confirm each email is valid and deliverable
- Warm-up and Delivery — Send emails through warmed-up infrastructure
Proxies play a role in all three layers, but this guide focuses on discovery and verification.
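As a toy sketch of how one contact record moves through the first two layers (every name here is an illustrative stand-in, not a real tool):

```python
def discover(contact):
    """Layer 1: generate candidate addresses (stand-in for Hunter.io or scraping)."""
    f = contact["first_name"].lower()
    l = contact["last_name"].lower()
    d = contact["domain"]
    return [f"{f}.{l}@{d}", f"{f[0]}{l}@{d}"]

def verify(email):
    """Layer 2: stand-in for the syntax/MX/SMTP/API checks covered below."""
    local, _, domain = email.partition("@")
    return bool(local) and "." in domain

def run_pipeline(contact):
    """Discovery then verification; layer 3 hands the survivors to sending infra."""
    return [e for e in discover(contact) if verify(e)]

print(run_pipeline({"first_name": "Ada", "last_name": "Lovelace", "domain": "example.com"}))
# -> ['ada.lovelace@example.com', 'alovelace@example.com']
```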
Popular Tools and Their Limits
| Tool | Free Tier | Paid Rate Limit | API Support |
|---|---|---|---|
| Hunter.io | 25 searches/month | 500/month ($49) | Yes |
| NeverBounce | Pay per verification | 10,000/hour | Yes |
| ZeroBounce | 100 free/month | Custom | Yes |
| Snov.io | 50 credits/month | Varies | Yes |
| Clearout | 100 free/month | Custom | Yes |
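The free-tier numbers in the table set the scale of any multi-account strategy. Covering a target monthly volume with Hunter.io free accounts (25 searches each, per the table) is simple division; keep in mind that each account also needs its own proxy IP and browser profile:

```python
import math

def accounts_needed(monthly_searches, free_tier_per_account=25):
    """How many free-tier accounts (and paired proxies) cover a monthly volume."""
    return math.ceil(monthly_searches / free_tier_per_account)

print(accounts_needed(1000))  # 1,000 domain searches/month
# -> 40
```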
Scaling Hunter.io with Proxies
Hunter.io provides two core functions: Domain Search (find all emails at a domain) and Email Finder (find a specific person’s email). Both consume credits.
Multi-Account Management
Run multiple Hunter.io free accounts, each with its own proxy IP and browser profile:
```python
import requests

class HunterAccountManager:
    """Manage multiple Hunter.io accounts, each paired with its own proxy."""

    def __init__(self, accounts, proxy_pool):
        self.accounts = accounts  # List of {email, password, api_key}
        self.proxy_pool = proxy_pool
        self.current_index = 0

    def get_next_account(self):
        """Round-robin through accounts and their paired proxies."""
        account = self.accounts[self.current_index]
        proxy = self.proxy_pool[self.current_index % len(self.proxy_pool)]
        self.current_index = (self.current_index + 1) % len(self.accounts)
        return account, proxy

    def domain_search(self, domain, retries_left=None):
        """Search Hunter.io for emails at a domain."""
        if retries_left is None:
            retries_left = len(self.accounts) - 1
        account, proxy = self.get_next_account()
        response = requests.get(
            "https://api.hunter.io/v2/domain-search",
            params={
                "domain": domain,
                "api_key": account["api_key"],
            },
            proxies={"https": proxy},
            timeout=15,
        )
        if response.status_code == 200:
            data = response.json()
            return {
                "domain": domain,
                "emails": [
                    {
                        "email": e["value"],
                        "type": e.get("type"),
                        "confidence": e.get("confidence"),
                        "first_name": e.get("first_name"),
                        "last_name": e.get("last_name"),
                        "position": e.get("position"),
                    }
                    for e in data.get("data", {}).get("emails", [])
                ],
                "pattern": data.get("data", {}).get("pattern"),
            }
        elif response.status_code == 429 and retries_left > 0:
            # Rate limited - retry on the next account, at most once per account
            return self.domain_search(domain, retries_left - 1)
        return None

    def email_finder(self, domain, first_name, last_name):
        """Find a specific person's email via Hunter.io."""
        account, proxy = self.get_next_account()
        response = requests.get(
            "https://api.hunter.io/v2/email-finder",
            params={
                "domain": domain,
                "first_name": first_name,
                "last_name": last_name,
                "api_key": account["api_key"],
            },
            proxies={"https": proxy},
            timeout=15,
        )
        if response.status_code == 200:
            data = response.json().get("data", {})
            return {
                "email": data.get("email"),
                "confidence": data.get("score"),
                "sources": data.get("sources", []),
            }
        return None
```

Web Interface Scraping
When API credits are exhausted, scrape Hunter.io’s web interface for limited data:
```python
import random
from playwright.async_api import async_playwright

async def hunter_web_search(domain, proxy_config):
    """Scrape Hunter.io's web interface (limited data without login)."""
    async with async_playwright() as p:
        browser = await p.chromium.launch(proxy=proxy_config)
        page = await browser.new_page()
        await page.goto(f"https://hunter.io/search/{domain}")
        await page.wait_for_timeout(random.randint(3000, 6000))

        # Extract the visible email pattern, if shown
        pattern_el = await page.query_selector('[class*="email-pattern"]')
        pattern = await pattern_el.inner_text() if pattern_el else None

        # Extract publicly visible emails
        email_els = await page.query_selector_all('[data-email]')
        emails = []
        for el in email_els:
            email = await el.get_attribute('data-email')
            if email:
                emails.append(email)

        await browser.close()
        return {"domain": domain, "pattern": pattern, "emails": emails}
```

Direct Email Discovery Without Paid Tools
Reduce dependency on paid tools by discovering emails directly from company websites. For a deeper look at the proxy concepts involved, check our proxy glossary.
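The integration pipeline later in this guide assumes a `scrape_website_emails` helper. One possible sketch of that helper, shown here synchronously and under assumed page paths (`/contact`, `/about`, `/team` are common but not guaranteed), is a regex sweep over likely contact pages:

```python
import re
import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html):
    """Pull de-duplicated email addresses out of raw HTML, keeping first-seen order."""
    return list(dict.fromkeys(EMAIL_RE.findall(html)))

def scrape_website_emails_sync(domain, proxy=None):
    """Fetch likely contact pages through a proxy and harvest addresses.

    A hypothetical synchronous stand-in for the scraping helper the
    pipeline assumes; swap in your own HTTP client and page list.
    """
    found = []
    for path in ("", "/contact", "/about", "/team"):  # assumed common pages
        try:
            resp = requests.get(
                f"https://{domain}{path}",
                proxies={"https": proxy} if proxy else None,
                timeout=15,
            )
            found.extend(extract_emails(resp.text))
        except requests.RequestException:
            continue
    return list(dict.fromkeys(found))

print(extract_emails('<a href="mailto:jane.doe@example.com">Jane</a> press@example.com'))
# -> ['jane.doe@example.com', 'press@example.com']
```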
Pattern-Based Email Generation
Hunter.io’s most valuable data point is the email pattern (e.g., {first}.{last}@company.com). You can discover patterns yourself:
```python
def generate_email_candidates(first_name, last_name, domain):
    """Generate email address candidates based on common patterns."""
    f = first_name.lower()
    l = last_name.lower()
    fi = f[0]  # First initial
    li = l[0]  # Last initial
    return [
        f"{f}@{domain}",       # john@company.com
        f"{l}@{domain}",       # smith@company.com
        f"{f}.{l}@{domain}",   # john.smith@company.com
        f"{f}{l}@{domain}",    # johnsmith@company.com
        f"{fi}{l}@{domain}",   # jsmith@company.com
        f"{f}{li}@{domain}",   # johns@company.com
        f"{fi}.{l}@{domain}",  # j.smith@company.com
        f"{f}_{l}@{domain}",   # john_smith@company.com
        f"{f}-{l}@{domain}",   # john-smith@company.com
        f"{l}.{f}@{domain}",   # smith.john@company.com
        f"{l}{fi}@{domain}",   # smithj@company.com
        f"{fi}{li}@{domain}",  # js@company.com
    ]

# Labels in the same order as the candidate list above
PATTERN_LABELS = [
    "{first}", "{last}", "{first}.{last}", "{first}{last}",
    "{f}{last}", "{first}{l}", "{f}.{last}", "{first}_{last}",
    "{first}-{last}", "{last}.{first}", "{last}{f}", "{f}{l}",
]

def detect_pattern(known_contacts, domain):
    """Detect the dominant email pattern from known (first_name, last_name, email)
    triples - the names are usually on the same page where each email was found."""
    from collections import Counter
    votes = Counter()
    for first, last, email in known_contacts:
        local = email.split('@')[0].lower()
        for label, candidate in zip(PATTERN_LABELS, generate_email_candidates(first, last, domain)):
            if candidate.split('@')[0] == local:
                votes[label] += 1
                break
    return votes.most_common(1)[0][0] if votes else None
```

SMTP Verification
Verify email candidates without sending actual emails:
```python
import smtplib
import dns.resolver  # pip install dnspython

def verify_email_smtp(email, proxy_url=None):
    """Verify email existence via an SMTP handshake, without sending a message.

    Note: smtplib has no native proxy support, so proxy_url is accepted only
    for interface symmetry with the API-based verifiers.
    """
    domain = email.split('@')[1]
    try:
        # Get MX records for the domain
        mx_records = dns.resolver.resolve(domain, 'MX')
        mx_host = str(mx_records[0].exchange).rstrip('.')

        # Connect to the mail server and walk through the envelope commands
        server = smtplib.SMTP(timeout=10)
        server.connect(mx_host, 25)
        server.helo('verify.example.com')
        server.mail('verify@example.com')
        code, message = server.rcpt(email)
        server.quit()

        # 250 = mailbox exists, 550 = mailbox does not exist
        return {
            "email": email,
            "valid": code == 250,
            "smtp_code": code,
            "message": message.decode(),
        }
    except Exception as e:
        return {
            "email": email,
            "valid": None,
            "error": str(e),
        }
```

Important: Many mail servers (especially Google Workspace and Microsoft 365) no longer respond accurately to SMTP verification. Use this as one signal among several.
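To actually route the SMTP handshake through a proxy, one common workaround (an assumption here, not something the SMTP code above does) is the third-party `pysocks` package, which can wrap smtplib's sockets; note that many mobile proxies block outbound port 25 entirely. The helper below only parses a SOCKS URL into the host/port that PySocks expects:

```python
from urllib.parse import urlparse

def smtp_proxy_config(proxy_url):
    """Parse a SOCKS proxy URL into the host/port shape PySocks expects.

    Intended usage (requires `pip install pysocks`):

        import socks, smtplib
        cfg = smtp_proxy_config("socks5://10.0.0.2:1080")
        socks.set_default_proxy(socks.SOCKS5, cfg["host"], cfg["port"])
        socks.wrap_module(smtplib)  # smtplib connections now go via the proxy
    """
    parsed = urlparse(proxy_url)
    return {"host": parsed.hostname, "port": parsed.port or 1080}

print(smtp_proxy_config("socks5://10.0.0.2:1080"))
# -> {'host': '10.0.0.2', 'port': 1080}
```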
Bulk Email Verification Pipeline
For large-scale verification, distribute requests across multiple verification services:
```python
import asyncio
import aiohttp

class BulkEmailVerifier:
    """Verify emails using multiple services with proxy rotation."""

    def __init__(self, proxy_pool):
        self.proxy_pool = proxy_pool
        self.results = {}

    async def verify_batch(self, emails, service="neverbounce"):
        """Verify a batch of emails with bounded concurrency."""
        semaphore = asyncio.Semaphore(10)
        tasks = []
        for email in emails:
            proxy = self.proxy_pool.get_next()
            tasks.append(self.verify_single(email, proxy, service, semaphore))
        return await asyncio.gather(*tasks)

    async def verify_single(self, email, proxy_url, service, semaphore):
        """Verify a single email address via the chosen service."""
        async with semaphore:
            if service == "neverbounce":
                return await self.verify_neverbounce(email, proxy_url)
            elif service == "zerobounce":
                return await self.verify_zerobounce(email, proxy_url)
            elif service == "smtp":
                # verify_email_smtp is blocking, so run it off the event loop
                return await asyncio.to_thread(verify_email_smtp, email, proxy_url)
            raise ValueError(f"Unknown service: {service}")

    async def verify_neverbounce(self, email, proxy_url):
        """Verify via the NeverBounce API."""
        async with aiohttp.ClientSession() as session:
            async with session.get(
                "https://api.neverbounce.com/v4/single/check",
                params={"key": "YOUR_API_KEY", "email": email},
                proxy=proxy_url,
                timeout=aiohttp.ClientTimeout(total=15),
            ) as response:
                data = await response.json()
                return {
                    "email": email,
                    "result": data.get("result"),
                    "valid": data.get("result") == "valid",
                }

    async def verify_zerobounce(self, email, proxy_url):
        """Verify via the ZeroBounce API."""
        async with aiohttp.ClientSession() as session:
            async with session.get(
                "https://api.zerobounce.net/v2/validate",
                params={"api_key": "YOUR_API_KEY", "email": email},
                proxy=proxy_url,
                timeout=aiohttp.ClientTimeout(total=15),
            ) as response:
                data = await response.json()
                return {
                    "email": email,
                    "status": data.get("status"),
                    "valid": data.get("status") == "valid",
                    "sub_status": data.get("sub_status"),
                }
```

Verification Result Classification
Email verification returns multiple status codes. Map them to actionable categories:
```python
def classify_verification_result(result):
    """Classify an email verification result into an actionable category."""
    # Normalize across services (e.g. ZeroBounce reports "catch-all")
    status = (result.get("result") or result.get("status") or "").lower().replace("-", "")
    classification = {
        "valid": {
            "action": "send",
            "priority": 1,
            "description": "Confirmed deliverable",
        },
        "catchall": {
            "action": "send_cautious",
            "priority": 2,
            "description": "Domain accepts all mail - the mailbox may or may not exist",
        },
        "unknown": {
            "action": "verify_alternate",
            "priority": 3,
            "description": "Could not confirm - try an alternate method",
        },
        "invalid": {
            "action": "discard",
            "priority": 4,
            "description": "Does not exist",
        },
        "disposable": {
            "action": "discard",
            "priority": 4,
            "description": "Temporary email service",
        },
    }
    return classification.get(status, {
        "action": "review",
        "priority": 3,
        "description": f"Unknown status: {status}",
    })
```

Cost Optimization Strategies
Verification costs add up at scale. Optimize with a tiered approach:
1. Syntax check (free) — Filter out obviously malformed addresses first.
2. MX record check (free) — Confirm the domain has mail servers.
3. SMTP handshake (free) — Attempt server-level verification.
4. Paid API verification — Only for emails that pass steps 1-3 but remain uncertain.
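A rough sketch of what the free tiers save (the pass rates and the per-check API price here are illustrative assumptions, not vendor figures):

```python
def tiered_cost(total, api_price=0.008, pass_rates=(0.90, 0.85, 0.50)):
    """Emails reaching the paid tier, and their API cost, for assumed pass rates.

    pass_rates = fraction surviving the syntax, MX, and SMTP tiers in turn.
    """
    remaining = total
    for rate in pass_rates:
        remaining = int(remaining * rate)
    return remaining, round(remaining * api_price, 2)

# 100k raw emails: only ~38k reach the paid API (~$306 vs ~$800 unfiltered)
print(tiered_cost(100_000))
# -> (38250, 306.0)
```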
```python
import re
import dns.resolver

def tiered_verification(email):
    """Multi-tier verification that minimizes paid API usage.

    Relies on verify_email_smtp() defined earlier.
    """
    # Tier 1: Syntax check (free)
    if not re.match(r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$', email):
        return {"email": email, "valid": False, "reason": "Invalid syntax", "tier": 1}

    # Tier 2: MX record check (free)
    domain = email.split('@')[1]
    try:
        mx = dns.resolver.resolve(domain, 'MX')
        if not mx:
            return {"email": email, "valid": False, "reason": "No MX records", "tier": 2}
    except Exception:
        return {"email": email, "valid": False, "reason": "DNS failure", "tier": 2}

    # Tier 3: SMTP check (free)
    smtp_result = verify_email_smtp(email)
    if smtp_result.get("valid") is False:
        return {"email": email, "valid": False, "reason": "SMTP rejected", "tier": 3}
    if smtp_result.get("valid") is True:
        return {"email": email, "valid": True, "reason": "SMTP confirmed", "tier": 3}

    # Tier 4: Paid API (only if SMTP was inconclusive)
    return {"email": email, "valid": None, "reason": "Needs API verification", "tier": 4}
```

Integration with Web Scraping Pipeline
Combine email verification with your web scraping proxy infrastructure to create a complete pipeline from discovery to verified contact:
```python
class EmailPipeline:
    """End-to-end email discovery and verification."""

    def __init__(self, hunter_manager, verifier, proxy_pool):
        self.hunter = hunter_manager
        self.verifier = verifier
        self.proxy_pool = proxy_pool

    async def process_company(self, company_domain, contact_name=None):
        """Run the full pipeline for a single company."""
        result = {"domain": company_domain, "emails": []}

        # Step 1: Hunter.io domain search
        hunter_data = self.hunter.domain_search(company_domain)
        if hunter_data:
            result["pattern"] = hunter_data.get("pattern")
            for email_data in hunter_data.get("emails", []):
                result["emails"].append({
                    "address": email_data["email"],
                    "source": "hunter",
                    "confidence": email_data.get("confidence"),
                })

        # Step 2: Website scraping fallback
        # (scrape_website_emails comes from your own scraping stack)
        if not result["emails"]:
            proxy = self.proxy_pool.get_next()
            scraped = await scrape_website_emails(company_domain, proxy)
            for email in scraped:
                result["emails"].append({
                    "address": email,
                    "source": "website_scrape",
                    "confidence": 80,
                })

        # Step 3: Verify all discovered emails
        for email_entry in result["emails"]:
            verification = tiered_verification(email_entry["address"])
            email_entry["verified"] = verification.get("valid")
            email_entry["verification_tier"] = verification.get("tier")

        return result
```

Conclusion
Email finding and verification at scale requires a combination of paid tools, direct scraping, and multi-tier verification. Mobile proxies enable you to manage multiple accounts across Hunter.io and other email finding services, while also powering direct email discovery from company websites. The tiered verification approach minimizes costs by filtering out invalid addresses through free methods before resorting to paid APIs. Build this pipeline once, and it becomes the foundation of every outbound sales campaign.
Related Reading
- How to Build an Automated Lead Scraping Pipeline with Proxies
- Building a B2B Contact Enrichment Pipeline with Mobile Proxies
- How to Scrape Job Listings at Scale with Rotating Proxies
- Proxies for HR Tech: Salary Benchmarking & Talent Intelligence
- aiohttp + BeautifulSoup: Async Python Scraping
- How to Scrape AliExpress Product Data Without Getting Blocked