Axios Retry Logic for Web Scraping: Handle Failures Gracefully

TL;DR
axios-retry and custom interceptor patterns let you build resilient scraping pipelines in Node.js. This guide covers retry strategies, exponential backoff, error classification, and proxy rotation integrated into the retry loop.

Most web scraping tutorials show you how to make a single successful request. Production scraping is almost entirely about handling failures: network timeouts, rate limits, proxy bans, and transient server errors. In Node.js, Axios combined with smart retry logic handles the majority of these failure modes without manual intervention.

This guide covers practical retry patterns for scraping workloads, from simple axios-retry setup to custom interceptors with proxy rotation.

Why requests fail in scraping contexts

Scraping requests fail for different reasons that require different handling strategies. A 429 (rate limited) needs a backoff wait. A 407 (proxy auth failed) needs a new proxy. A 503 (server overloaded) may succeed on immediate retry. A 403 (blocked) usually needs both a new proxy and header rotation. Conflating these into a single “retry everything” strategy wastes time and burns through proxy quota unnecessarily.

Classify your errors before building retry logic. The classification shapes everything downstream.
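
A minimal classifier along these lines might look like the sketch below; the action names are illustrative, not from any library, and the mapping should be tuned per target.

// sketch: map an Axios error to a handling strategy
function classifyError(error) {
  if (!error.response) return 'retry';             // network error: retry as-is
  switch (error.response.status) {
    case 429: return 'backoff';                    // rate limited: wait, then retry
    case 407: return 'rotate-proxy';               // proxy auth failed: swap proxy
    case 403: return 'rotate-proxy-and-headers';   // blocked: new proxy and new headers
    case 500:
    case 502:
    case 503:
    case 504: return 'retry';                      // transient server error
    default:  return 'fail';                       // permanent rejection: do not waste retries
  }
}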

Basic axios-retry setup

const axios = require('axios');
const axiosRetry = require('axios-retry').default;

const client = axios.create({
  timeout: 15000,
  headers: {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
  }
});

axiosRetry(client, {
  retries: 3,
  retryDelay: axiosRetry.exponentialDelay,
  retryCondition: (error) => {
    // retry on network errors and 5xx responses
    return axiosRetry.isNetworkError(error) || axiosRetry.isRetryableError(error);
  }
});

// usage
async function fetchPage(url) {
  const response = await client.get(url);
  return response.data;
}

The exponentialDelay function adds jitter automatically in recent versions of axios-retry, which prevents thundering herd problems when many concurrent workers retry simultaneously.
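
If you want explicit control over the randomization instead of relying on the library default, a hand-rolled "full jitter" delay is straightforward. This is a sketch; the 1-second base and 30-second cap are assumptions to tune for your targets.

// full-jitter exponential backoff: pick a random delay between 0 and the exponential ceiling
function fullJitterDelay(retryCount) {
  const base = 1000;    // 1 second
  const cap = 30000;    // never wait longer than 30 seconds
  const ceiling = Math.min(cap, base * Math.pow(2, retryCount));
  return Math.random() * ceiling;
}

// plug it in as the retryDelay option:
// axiosRetry(client, { retries: 3, retryDelay: (retryCount) => fullJitterDelay(retryCount) });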

Custom retry conditions for scraping

axios-retry’s default isRetryableError covers network errors and 5xx responses. For scraping, you also need to handle 429 rate limits and some 4xx responses that represent transient blocks rather than permanent rejections.

axiosRetry(client, {
  retries: 5,
  retryDelay: (retryCount, error) => {
    if (error.response && error.response.status === 429) {
      // respect Retry-After header if present (seconds form; HTTP-date values fall through)
      const retryAfter = parseInt(error.response.headers['retry-after'], 10);
      if (!Number.isNaN(retryAfter)) {
        return retryAfter * 1000;
      }
      return Math.pow(2, retryCount) * 1000; // exponential fallback
    }
    return axiosRetry.exponentialDelay(retryCount);
  },
  retryCondition: (error) => {
    if (!error.response) return true; // network error, always retry
    const status = error.response.status;
    // retry on: rate limits, server errors, some gateway errors
    return [429, 500, 502, 503, 504].includes(status);
  }
});

Proxy rotation in the retry interceptor

Integrating proxy rotation with retry logic is one of the most useful patterns in Node.js scraping. When a request fails due to a proxy-related error (407, connection refused, proxy timeout), the request interceptor assigns a new proxy for the retry attempt.

const { HttpsProxyAgent } = require('https-proxy-agent');
const { HttpProxyAgent } = require('http-proxy-agent');

const proxyPool = [
  'http://user:pass@proxy1.example.com:8080',
  'http://user:pass@proxy2.example.com:8080',
  'http://user:pass@proxy3.example.com:8080'
];

let proxyIndex = 0;

function getNextProxy() {
  const proxy = proxyPool[proxyIndex % proxyPool.length];
  proxyIndex++;
  return proxy;
}

// request interceptor: assign a fresh proxy before each attempt (retries pass through here again)
client.interceptors.request.use((config) => {
  const proxyUrl = getNextProxy();
  config.httpsAgent = new HttpsProxyAgent(proxyUrl);
  config.httpAgent = new HttpProxyAgent(proxyUrl);
  config.proxy = false; // disable axios's built-in proxy handling so the agents take over
  return config;
});

// response interceptor: log failures for monitoring
client.interceptors.response.use(
  (response) => response,
  (error) => {
    const status = error.response ? error.response.status : 'network';
    console.error(`request failed: ${error.config.url} — status: ${status}`);
    return Promise.reject(error);
  }
);

Understand the difference between SOCKS5 and HTTP proxies when selecting your proxy agent library. SOCKS5 proxies require a different agent (socks-proxy-agent) and operate at the TCP level, so they can carry traffic beyond plain HTTP.
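
If your pool is SOCKS5, the setup is similar. A sketch with socks-proxy-agent; the proxy URL is a placeholder.

const { SocksProxyAgent } = require('socks-proxy-agent');

const socksAgent = new SocksProxyAgent('socks5://user:pass@proxy.example.com:1080');

const socksClient = axios.create({
  timeout: 15000,
  httpAgent: socksAgent,   // the same agent handles both plain and TLS traffic
  httpsAgent: socksAgent,
  proxy: false             // let the agent do the proxying
});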

Handling timeout failures separately

Timeouts need special handling because they can indicate either a slow proxy or a slow target server. Differentiate between connection timeouts (a proxy problem) and read timeouts (a server or content problem), and apply different retry strategies to each.

const client = axios.create({
  timeout: 20000  // combined timeout
});

// or use separate connect/read timeouts via a custom adapter
// axios does not natively separate these, but you can detect
// which phase failed from the error code
client.interceptors.response.use(null, (error) => {
  if (error.code === 'ECONNABORTED') {
    // timeout — try with a different proxy
    console.log('timeout, will retry with new proxy');
  }
  if (error.code === 'ECONNREFUSED' || error.code === 'ECONNRESET') {
    // proxy connection issue
    console.log('proxy connection failed, rotating');
  }
  return Promise.reject(error);
});

Circuit breaker pattern for failing domains

For large-scale scrapers hitting many domains, implement a circuit breaker: track failure rates per domain and stop sending requests to domains that are consistently failing. This prevents wasting proxy quota on permanently blocked targets while the rest of the scrape continues.

The opossum library implements circuit breakers for Node.js and integrates cleanly with Axios. Set a failure threshold of 50% over a 30-second window and an open-circuit timeout of 60 seconds for most scraping targets.
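
A per-domain sketch using opossum, reusing the client from earlier; the option values mirror the thresholds above and are starting points, not requirements.

const CircuitBreaker = require('opossum');

// one breaker per domain, created lazily
const breakers = new Map();

function getBreaker(domain) {
  if (!breakers.has(domain)) {
    const breaker = new CircuitBreaker((url) => client.get(url), {
      errorThresholdPercentage: 50,   // open after 50% of requests fail...
      rollingCountTimeout: 30000,     // ...within a 30-second rolling window
      resetTimeout: 60000,            // probe the domain again after 60 seconds
      timeout: 20000                  // treat requests slower than 20s as failures
    });
    breaker.on('open', () => console.warn(`circuit open for ${domain}`));
    breakers.set(domain, breaker);
  }
  return breakers.get(domain);
}

async function fetchWithBreaker(url) {
  const domain = new URL(url).hostname;
  const response = await getBreaker(domain).fire(url);
  return response.data;
}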

Monitoring and alerting

Log retry events as structured JSON that includes the URL, attempt number, status code, proxy used, and timestamp. Aggregate these logs to detect when a target site has changed its anti-bot strategy. A sudden spike in 403 retries across many URLs indicates a new detection pattern that needs investigation, not just more retries.
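
Recent versions of axios-retry expose an onRetry hook that fires before each retry attempt, which is a convenient place to emit that log line. A sketch; the proxyUrl field is hypothetical and assumes the rotation interceptor stashes the chosen proxy URL on the request config.

axiosRetry(client, {
  retries: 3,
  retryDelay: axiosRetry.exponentialDelay,
  onRetry: (retryCount, error, requestConfig) => {
    console.log(JSON.stringify({
      event: 'retry',
      url: requestConfig.url,
      attempt: retryCount,
      status: error.response ? error.response.status : null,
      code: error.code || null,
      proxy: requestConfig.proxyUrl || null, // hypothetical field set by the rotation interceptor
      timestamp: new Date().toISOString()
    }));
  }
});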

Fewer retries start with better proxies. Our Singapore mobile proxy uses real 4G/5G carrier IPs that reduce blocks and timeouts from the start.

Learn about the full web scraping pipeline, including request infrastructure, parsing, and storage, to understand where retry logic fits in the overall architecture.

Last updated: April 3, 2026
