How to Use Proxies with Browser-Use (Agentic AI Web Scraping)

browser-use is the python library that lets a language model drive a real chromium browser. it works great out of the box, but the moment you point it at a site that fingerprints aggressively (linkedin, indeed, amazon, facebook), your agent’s session dies in 2-3 page loads. a proxy fixes that. this tutorial shows the working setup in under 200 lines.

what is browser-use

browser-use wraps playwright and exposes a high-level api the llm can call. you give it a goal in natural language (“find the cheapest flight from singapore to tokyo on march 20”), and it clicks, types, scrolls, and extracts. the project is open-source at github.com/browser-use/browser-use and as of may 2026 it sits at version 0.3.x with weekly releases.

if you’ve never used it, our headless browser automation guide covers the chromium fundamentals first.

why you need a proxy with browser-use

three reasons:

(1) your home or datacenter ip gets flagged within minutes on protected sites. browser-use makes thousands of requests per session if the agent is exploring.

(2) geo-restricted content. asking the agent to “compare amazon prices in the us, uk, japan” requires three different residential exits.

(3) parallel agents. running 10 agents from the same ip is the fastest way to a captcha wall.

we benchmarked which proxies actually survive browser-use sessions in our browser-use and operator proxy comparison. short version: residential mobile beats datacenter for protected sites, datacenter is fine for everything else.

installing browser-use

pip install browser-use
playwright install chromium

you need python 3.11+. browser-use uses async, so all examples below run inside asyncio.run(...).

the simplest possible proxy setup

browser-use exposes a BrowserConfig that accepts a chromium-style proxy block. here's the minimum:

import asyncio
from browser_use import Agent, Browser, BrowserConfig
from langchain_openai import ChatOpenAI

async def main():
    browser = Browser(
        config=BrowserConfig(
            proxy={
                "server": "http://gate.dataresearchtools.com:8000",
                "username": "user-session-abc123",
                "password": "your_password",
            }
        )
    )

    agent = Agent(
        task="go to httpbin.org/ip and tell me the ip you see",
        llm=ChatOpenAI(model="gpt-4o"),
        browser=browser,
    )

    result = await agent.run()
    print(result)
    await browser.close()

asyncio.run(main())

if you see your proxy’s ip in the output, you’re done. if you see your home ip, the proxy block didn’t apply. usually a typo in the server url.

sticky session vs rotating

most residential providers expose two flavors of credentials. a sticky session keeps the same exit ip for a fixed window (10-30 minutes typical). a rotating session swaps the ip on every request.

for browser-use, you want sticky. the agent navigates, clicks, fills forms across multiple pages within a single task. if the ip rotates mid-task, you’ll fail captchas, lose login cookies, and confuse the target site’s rate limiting in ways that look more bot-like, not less.

proxy={
    "server": "http://gate.provider.com:8000",
    # session-id pinned for 30 minutes
    "username": "user-country-us-session-xyz789",
    "password": "your_password",
}

format varies per provider. bright data uses brd-customer-XXX-zone-residential-session-YYY, oxylabs uses customer-USER-cc-us-sessid-XYZ. check your dashboard.

adding country and city targeting

agent tasks often need a specific geo. drop the country code into the username:

proxy={
    "server": "http://gate.provider.com:8000",
    "username": "user-country-jp-city-tokyo-session-abc",
    "password": "your_password",
}

verify with a quick check before the real task:

agent = Agent(
    task="go to ifconfig.co and report the country and city shown",
    llm=ChatOpenAI(model="gpt-4o"),
    browser=browser,
)

if the agent reports tokyo, japan, you’re geo-targeted correctly.

handling auth challenges

some providers require you to whitelist your client ip instead of using user/pass. that breaks if your agent runs from a serverless function with a changing ip. switch to user/pass auth in the dashboard before debugging anything.

if you see a chromium error like ERR_PROXY_CONNECTION_FAILED, the credentials are wrong or your account has zero balance. log into the provider, check the gateway url is the current one, and try again.

per-tab proxy with multi-context

a single agent can run multiple tabs, each with its own proxy. this is how you compare amazon.com vs amazon.co.jp in one task.

from browser_use import Browser, BrowserConfig

browser = Browser(config=BrowserConfig())

us_context = await browser.new_context(
    proxy={"server": "http://gate.provider.com:8000",
           "username": "user-country-us-session-1",
           "password": "pwd"}
)

jp_context = await browser.new_context(
    proxy={"server": "http://gate.provider.com:8000",
           "username": "user-country-jp-session-2",
           "password": "pwd"}
)

then attach each context to its own agent task and run them concurrently with asyncio.gather.
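a sketch of what that concurrent run looks like. the keyword for handing a context to an agent (browser_context here) has changed between browser-use releases, so treat this as the shape, not gospel, and check your installed version:

```python
import asyncio

async def run_in_context(context, task, llm):
    # deferred import so this sketch stays importable without browser-use installed
    from browser_use import Agent
    # note: the keyword for passing a context (browser_context here) varies
    # across browser-use versions -- check the docs for yours
    agent = Agent(task=task, llm=llm, browser_context=context)
    return await agent.run()

async def compare_regions(us_context, jp_context, llm):
    # both agents browse at the same time, each through its own geo exit
    us_result, jp_result = await asyncio.gather(
        run_in_context(us_context, "find the price of the item on amazon.com", llm),
        run_in_context(jp_context, "find the price of the same item on amazon.co.jp", llm),
    )
    return us_result, jp_result
```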

debugging: confirm the proxy is actually used

when nothing seems to work, run this quick requests check first:

import requests

resp = requests.get(
    "https://api.ipify.org?format=json",
    proxies={
        "http": "http://user:pwd@gate.provider.com:8000",
        "https": "http://user:pwd@gate.provider.com:8000",
    },
    timeout=10,
)
print(resp.json())

if requests can hit the proxy and gets back the right ip, the credentials and gateway are correct. then the bug is in your browser-use config, not your proxy account. saves an hour of staring at chromium logs.

rotating ips between tasks (not within a task)

if you want each new agent task to get a fresh ip but keep the ip stable inside the task, generate a new session id for each run:

import uuid
from browser_use import Agent, Browser, BrowserConfig

def make_proxy():
    # a fresh session id means a fresh exit ip, stable for this browser's lifetime
    return {
        "server": "http://gate.provider.com:8000",
        "username": f"user-session-{uuid.uuid4().hex[:8]}",
        "password": "pwd",
    }

# inside an async function; tasks and llm defined elsewhere
for task in tasks:
    browser = Browser(config=BrowserConfig(proxy=make_proxy()))
    agent = Agent(task=task, llm=llm, browser=browser)
    await agent.run()
    await browser.close()

clean, simple, and survives the longest scraping sessions.

handling captchas

browser-use’s llm tries to solve captchas itself. it fails on hcaptcha and recaptcha v3 most of the time. for production, hand off captchas to a solver:

from browser_use import Agent

agent = Agent(
    task="...",
    llm=llm,
    browser=browser,
    extend_system_message=(
        "if you see a captcha, do not try to solve it. "
        "call solve_captcha(image_url) and wait."
    ),
)

then wire solve_captcha to capsolver or 2captcha as a custom tool. cheaper than burning gpt-4o tokens on a recaptcha grid.
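one way to back that solver, sketched against 2captcha's classic in.php/res.php http api. the api key is obviously yours to fill in, and how you register this as a browser-use custom tool (recent versions use a Controller with an action decorator) varies by release, so check the docs for your version:

```python
import time
import requests

API_KEY = "your_2captcha_key"  # assumption: a funded 2captcha account

def solve_image_captcha(image_b64: str, timeout: int = 120) -> str:
    # submit a base64-encoded captcha image, then poll until solved
    submit = requests.post(
        "http://2captcha.com/in.php",
        data={"key": API_KEY, "method": "base64", "body": image_b64, "json": 1},
        timeout=30,
    ).json()
    captcha_id = submit["request"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        time.sleep(5)  # 2captcha asks you to wait a few seconds between polls
        poll = requests.get(
            "http://2captcha.com/res.php",
            params={"key": API_KEY, "action": "get", "id": captcha_id, "json": 1},
            timeout=30,
        ).json()
        if poll["status"] == 1:
            return poll["request"]  # the solved text
    raise TimeoutError("captcha not solved in time")
```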

real-world setup for protected sites

linkedin, amazon, indeed, and similar sites profile fingerprints aggressively. residential alone is not enough. the working stack:

  • mobile or residential rotating proxy with sticky 10-min sessions
  • chromium launched with --disable-blink-features=AutomationControlled
  • a real user-agent string that matches the chromium version
  • realistic viewport (1920×1080, not the default 1280×720)
  • 2-3 second random delays between actions

browser = Browser(
    config=BrowserConfig(
        proxy={...},
        chrome_args=[
            "--disable-blink-features=AutomationControlled",
            "--window-size=1920,1080",
        ],
        user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36",
    )
)

this combination passes most fingerprint checks in 2026.
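the random-delay bullet is the one people forget, because nothing in BrowserConfig enforces it. a tiny helper you can await between custom steps (wiring it into the agent's action loop itself is version-dependent):

```python
import asyncio
import random

async def human_pause(low: float = 2.0, high: float = 3.0) -> float:
    # sleep a random 2-3 seconds to mimic human pacing between actions
    delay = random.uniform(low, high)
    await asyncio.sleep(delay)
    return delay
```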

frequently asked questions

does browser-use support socks5 proxies?

yes, but with caveats. chromium accepts socks5:// in the server field but ignores user/pass auth on socks5. use http proxies if your provider requires authentication.
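in config terms, the two variants look like this; the gateway hosts and ports are placeholders:

```python
# chromium accepts socks5:// but silently drops username/password on it,
# so only an unauthenticated socks5 endpoint works
socks5_proxy = {"server": "socks5://gate.provider.com:1080"}

# authenticated plans should stay on http
http_proxy = {
    "server": "http://gate.provider.com:8000",
    "username": "user-session-abc",
    "password": "pwd",
}
```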

can i use free proxies with browser-use?

technically yes, in practice no. free proxies are slow, blocked everywhere worth scraping, and often middlemen. you’ll waste more in llm tokens retrying failed pages than a paid proxy costs.

how much does a browser-use scraping session cost in proxy bandwidth?

a typical 5-minute browsing task uses 50-150mb. residential at $4/gb means 20-60 cents per task. mobile at $8/gb roughly doubles that.
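the arithmetic, as a throwaway helper (the per-gb figures above are this article's estimates, not quoted rates):

```python
def task_cost_usd(mb_used: float, usd_per_gb: float) -> float:
    # bandwidth cost of one browsing task, using 1 gb = 1000 mb
    return round(mb_used / 1000 * usd_per_gb, 2)
```

task_cost_usd(150, 4.0) gives 0.6, the ~60-cent upper bound for residential; the same 150 mb at mobile's $8/gb comes to 1.2.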

why do my agents get captchas even with residential proxies?

three usual culprits: ip is on a residential pool but the asn looks datacenter, your browser fingerprint is too clean, or you’re hitting the same domain too fast across multiple agents.

can i rotate proxies inside a single task?

you can but you shouldn’t. mid-task ip rotation breaks session cookies and triggers more captchas, not fewer.

what’s the cheapest proxy that works with browser-use?

isp proxies. roughly $1-2/gb, faster than residential, and pass most fingerprint checks except on the most paranoid sites.

final thoughts

a proxy is the smallest config change that doubles a browser-use agent's survival rate. start with residential sticky sessions, add country targeting when needed, and pre-flight every credential change with the quick requests test before you fight chromium. once it works, it works for thousands of tasks.
