Make.com (still called Integromat by a lot of practitioners) sits in an interesting spot for web scraping in 2026: it’s visual, low-code, and surprisingly capable once you push past the obvious “HTTP > Parse JSON” happy path. If you’ve already read the n8n HTTP and Playwright workflow guide or the Zapier webhooks + code steps breakdown, Make fits roughly in the same tier — but with a few module-level tricks that make it better than it looks on the surface.
What Make.com Actually Does Well for Scraping
Make’s HTTP module is the core tool. It supports custom headers, raw body payloads, multipart form data, cookie passthrough, and response parsing in a single step. For simple structured endpoints — public JSON APIs, paginated REST feeds, RSS-backed product catalogs — you can build a working scraper in under 10 minutes with no code.
The iterator + aggregator pattern is where Make earns its keep. You fetch a list endpoint, iterate over items with the Iterator module, run a sub-request per item in parallel (or sequentially via a queue), then aggregate results into a Google Sheet or webhook. This maps cleanly to the kind of enrichment pipelines data teams actually run at scale.
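The Iterator → per-item request → Aggregator flow maps directly to a loop in code. Here is a minimal sketch of that pattern as plain JavaScript, where `fetchJson` stands in for Make’s HTTP module and the endpoint shape (`items`, `id`) is an illustrative assumption:

```javascript
// Sketch of Make's Iterator -> per-item HTTP -> Aggregator pattern.
// `fetchJson` stands in for the HTTP module; URLs and field names
// are placeholders, not a real API.
async function enrichPipeline(fetchJson, listUrl) {
  const list = await fetchJson(listUrl);            // list endpoint (HTTP module)
  const enriched = [];
  for (const item of list.items) {                  // Iterator module
    const detail = await fetchJson(`${listUrl}/${item.id}`);
    enriched.push({ ...item, ...detail });          // merge list row + detail
  }
  return enriched;                                  // Aggregator module
}
```

In Make the same logic is three modules and zero code; the sketch just makes the data flow explicit — one list call, N detail calls, one merged output.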
Where Make falls short: it has no native browser rendering. If the target site runs heavy JavaScript, Make alone cannot parse what a headless browser would see. You need an external render layer — Browserless, ScrapingBee, or a self-hosted Playwright API — and proxy that render endpoint through Make’s HTTP module. That adds latency and cost, but it works.
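Wiring a render layer into Make means pointing the HTTP module at the render service instead of the target site. The sketch below builds such a request; the endpoint is shaped like Browserless’s `/content` API, but treat the exact URL, token parameter, and payload fields as assumptions to verify against your provider’s documentation:

```javascript
// Sketch: calling an external render layer from Make's HTTP module.
// Endpoint URL and payload shape are assumptions based on Browserless's
// /content API -- check your render provider's docs before relying on them.
function buildRenderRequest(targetUrl, token) {
  return {
    url: `https://chrome.browserless.io/content?token=${token}`, // assumed endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The render service loads targetUrl in a headless browser and returns
    // the post-JavaScript HTML, which Make's parser modules can then handle.
    body: JSON.stringify({ url: targetUrl })
  };
}
```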
HTTP Module Configuration Tricks
The default HTTP module config misses several things that anti-bot systems check immediately.
Headers to always set manually:
- User-Agent: use a real Chrome UA string, not the Make default
- Accept-Language: en-US,en;q=0.9
- Accept-Encoding: gzip, deflate, br
- Sec-Fetch-Mode: navigate (for page-level requests)
- Referer: a plausible origin URL
```http
GET /products?page=2 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
Referer: https://example.com/products
```

This won’t fool TLS fingerprinting — Make sends HTTP/1.1 by default and has a fixed JA3 fingerprint you cannot override from inside the platform. For targets that fingerprint at the TLS layer, read HTTP/2 fingerprinting and why headers alone aren’t enough before going further. The short version: route through a residential proxy service that handles TLS termination so the fingerprint belongs to the proxy, not Make’s infrastructure.
Proxy configuration lives under HTTP > Advanced Settings > Proxy. Make supports HTTP and SOCKS5 proxies. Use sticky sessions (same IP per scenario run) for sites that tie session cookies to IP. For rotating proxies, set the proxy at the scenario level and let Make randomize per execution.
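Sticky sessions are usually requested through the proxy credentials rather than the URL path. The sketch below shows one common convention — encoding a session ID in the proxy username — but the exact format is provider-specific, so treat it as an assumption to check against your proxy provider’s docs:

```javascript
// Sketch: building proxy URLs for sticky vs rotating sessions. Many
// residential providers pin one exit IP when a session ID is present in
// the username; the `-session-` format here is a common convention, not
// a universal standard.
function proxyUrl({ user, pass, host, port, sessionId }) {
  const username = sessionId ? `${user}-session-${sessionId}` : user;
  return `http://${username}:${pass}@${host}:${port}`;
}
```

Reusing the same `sessionId` across an entire scenario run keeps cookies and IP aligned; omitting it lets the provider rotate per request.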
Pagination and Rate Limiting
Most production scraping jobs involve pagination. Make handles this two ways:
- Loop with a counter: initialize a counter variable at 1, fetch ?page={{counter}}, parse results, increment, and use a router to break when the result count falls below the expected page size.
- Cursor/token pagination: extract the next_cursor from the response body, feed it into the next HTTP call via a variable, and repeat until the cursor is null.
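The cursor variant is easy to get subtly wrong (off-by-one pages, infinite loops on a non-null sentinel), so here is a minimal sketch of the loop in JavaScript, with `fetchPage` standing in for Make’s HTTP module and the `next_cursor` field name taken from the description above:

```javascript
// Sketch of cursor/token pagination: keep feeding next_cursor back into
// the request until the API returns null. `fetchPage` is a stand-in for
// Make's HTTP module.
async function fetchAll(fetchPage) {
  const rows = [];
  let cursor = null;                       // first call carries no cursor
  do {
    const page = await fetchPage(cursor);  // HTTP call with cursor variable
    rows.push(...page.items);
    cursor = page.next_cursor;             // value for the next iteration
  } while (cursor !== null);               // stop exactly when cursor is null
  return rows;
}
```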
For rate limiting, Make has no native per-request delay below the scenario scheduling level. The workaround is a Sleep module (set to 1-3 seconds between iterator cycles) or splitting requests across multiple scheduled scenarios staggered by time. Heavy scraping jobs hitting aggressive rate limits benefit from a queue architecture — write URLs to a datastore, run a separate scenario that pops and processes one URL at a time.
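The queue architecture described above reduces to a producer writing URLs and a consumer popping one per run with a pause in between. A minimal sketch, where an in-memory array stands in for Make’s Data Store module and the delay plays the role of the Sleep module:

```javascript
// Sketch of the queue architecture: one scenario enqueues URLs, a second
// scenario pops and processes one at a time. The array is a stand-in for
// Make's Data Store; in Make these would be two separate scenarios.
const queue = [];

function enqueue(urls) { queue.push(...urls); }    // producer scenario

async function processOne(fetchFn, delayMs = 2000) {
  const url = queue.shift();                       // pop next URL from the store
  if (!url) return null;                           // queue drained
  const result = await fetchFn(url);               // scrape it
  await new Promise(r => setTimeout(r, delayMs));  // Sleep module equivalent
  return result;
}
```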
Make vs. Alternatives: Where It Fits
| Platform | JS Rendering | Code Steps | Proxy Config | Scheduling | Free Tier |
|---|---|---|---|---|---|
| Make.com | External only | Limited (JS eval) | Per-module | Cron + webhook | 1,000 ops/mo |
| n8n (self-hosted) | Via nodes | Full Node.js | Per-node | Cron + trigger | Unlimited |
| Zapier | External only | Python/JS | Not native | Trigger-based | 100 tasks/mo |
| Pipedream | Via steps | Full Node.js | Per-step | Cron + event | 10,000 credits/mo |
| Activepieces | External only | JS sandboxed | Not native | Cron + trigger | Self-host only |
Make’s operations-based pricing (not task-based) means a single scenario run consuming 50 module operations costs 50 ops against your monthly limit. Complex scraping pipelines with iterators burn ops fast. If your scraper runs 500 URLs a day through a 6-module chain, that’s 3,000 ops/day — you’ll need at least the Core plan.
For self-hosted flexibility and full code access, Pipedream’s source/action model or Activepieces OSS are worth comparing before committing to Make’s pricing tier.
Error Handling and Retry Logic
Make has a built-in error handler (the small wrench icon on any module). For scraping, configure it as follows:
- Resume: skip the failed item and continue the iterator — good for bulk jobs where a few 404s are expected
- Retry: retry the same module up to 3 times with a configurable delay — good for transient 429s
- Break: stop the scenario and log the error — good for auth failures where retrying makes no sense
A useful pattern: route all HTTP responses through a router that checks {{statusCode}}. 200 goes to parsing. 429 writes the URL to a retry datastore. 403/401 fires a webhook alert. 5xx retries twice, then breaks.
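That router boils down to a status-code dispatch table. A sketch of the same branching as a plain function (route names are descriptive placeholders for the module paths described above):

```javascript
// Sketch of the status-code router: mirrors the branching described in
// the text. Route names are placeholders for the actual Make modules.
function routeResponse(statusCode, attempt = 0) {
  if (statusCode === 200) return "parse";            // normal path
  if (statusCode === 429) return "retry-datastore";  // queue URL for later
  if (statusCode === 401 || statusCode === 403) return "alert-webhook";
  if (statusCode >= 500) return attempt < 2 ? "retry" : "break";
  return "skip";                                     // e.g. 404 on a bulk job
}
```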
For captcha responses (returning 200 with a challenge page rather than an error code), you need to check response body length or a known DOM marker in the parsed HTML. Make’s text parser module can extract a string match — if the string “cf-challenge” appears in the body, treat it as a failure and rotate proxy.
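The two checks above — known challenge marker plus suspicious body length — can be sketched as a single predicate. The length threshold is an assumption you should tune per target:

```javascript
// Sketch of soft-block detection on a 200 response: a known challenge
// marker or an implausibly short body means the page is a challenge, not
// content. The 5000-byte threshold is an assumed default -- tune it per site.
function isChallenge(body, minLength = 5000) {
  if (body.includes("cf-challenge")) return true;  // Cloudflare marker from the text
  return body.length < minLength;                  // suspiciously short page
}
```

In Make this is a text parser match feeding a router; on a true result, rotate the proxy and re-queue the URL rather than parsing garbage.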
Bottom Line
Make.com is a solid middle-ground scraping platform if your targets serve structured data or you’re willing to proxy render through an external service. It’s not the right tool for heavy JavaScript-rendered sites or scenarios where TLS fingerprint evasion is required at scale — for those, a code-first platform wins on flexibility and cost. DRT covers the full automation scraping landscape across platforms, and if you’re evaluating which no-code or low-code tool fits your pipeline, the comparison table above should give you a clear starting point.
Related guides on dataresearchtools.com
- Web Scraping with n8n in 2026: HTTP + Playwright Workflow Patterns
- Web Scraping with Zapier in 2026: Webhooks + Code Steps
- Web Scraping with Pipedream in 2026: Source/Action Patterns
- Web Scraping with Activepieces (OSS) in 2026: Workflow Patterns
- HTTP/2 Fingerprinting for Web Scraping: Why Headers Are Not Enough