Firecrawl MCP Server: Complete Setup Guide

The Firecrawl MCP Server gives AI coding assistants like Claude and Cursor the ability to scrape and read any website in real time. Through the Model Context Protocol (MCP), your AI assistant can fetch live web data, extract structured information, and use it directly in conversations — without you leaving your IDE.

This guide covers everything from installation to advanced usage patterns for the Firecrawl MCP server.

What Is MCP?

The Model Context Protocol (MCP) is an open standard developed by Anthropic that allows AI models to interact with external tools and data sources. Think of it as a plugin system for AI assistants.

Instead of the AI being limited to its training data, MCP lets it call external tools — databases, APIs, file systems, and web scrapers — during a conversation.

How Firecrawl MCP Works

You ask Claude/Cursor a question → AI decides to use Firecrawl → MCP server calls Firecrawl API → Web data returned → AI uses data in response

The AI decides when to scrape a website based on your request. If you ask “What are the current pricing plans for Service X?”, the AI can use Firecrawl to fetch that page live and give you up-to-date information.
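
Under the hood, each step in that chain is an MCP `tools/call` request from the client to the server. A minimal sketch of what the client sends when the AI decides to scrape (the `id` and URL are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "firecrawl_scrape",
    "arguments": { "url": "https://example.com/pricing" }
  }
}
```

The MCP server translates this into a Firecrawl API call and returns the page content as a tool result, which the AI then reads as part of the conversation.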

Why Use Firecrawl MCP?

| Benefit | Description |
| --- | --- |
| Live web data | AI can access current website content, not stale training data |
| In-context research | Research without leaving your IDE |
| Clean data | Firecrawl’s markdown output is optimized for LLM consumption |
| Structured extraction | Pull specific data fields from any page |
| Site crawling | Index entire documentation sites during a conversation |
| No manual copy-paste | AI fetches data directly instead of you copying URLs |

Installation

Prerequisites

  • Node.js 18+ or Python 3.10+ (depending on whether you install via npm or pip)
  • Firecrawl API key — get one at firecrawl.dev
  • An MCP-compatible client — Claude Desktop, Cursor, or another MCP host

Install the Firecrawl MCP Server

Via npm (recommended for most setups):

npm install -g firecrawl-mcp

Via pip:

pip install firecrawl-mcp

Configuration for Different Clients

Claude Desktop

Edit your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-your-api-key-here"
      }
    }
  }
}

Restart Claude Desktop after saving.

Cursor

In Cursor, configure MCP servers through Settings:

  1. Open Cursor Settings
  2. Navigate to Features → MCP Servers
  3. Click Add Server
  4. Configure:
  • Name: firecrawl
  • Command: npx -y firecrawl-mcp
  • Environment: FIRECRAWL_API_KEY=fc-your-key

Or edit .cursor/mcp.json in your project:

{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-your-api-key-here"
      }
    }
  }
}

Claude Code (CLI)

claude mcp add firecrawl --env FIRECRAWL_API_KEY=fc-your-key -- npx -y firecrawl-mcp

The -- separates Claude Code’s own flags from the command used to launch the server.

VS Code with Continue

In .continue/config.json:

{
  "mcpServers": [
    {
      "name": "firecrawl",
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-your-key"
      }
    }
  ]
}

Available Tools

The Firecrawl MCP server exposes several tools to the AI:

| Tool | Description | Use Case |
| --- | --- | --- |
| firecrawl_scrape | Scrape a single URL | Get content from a specific page |
| firecrawl_crawl | Crawl multiple pages | Index a documentation site |
| firecrawl_map | Discover URLs on a site | Find all pages before targeted scraping |
| firecrawl_extract | Extract structured data | Pull specific fields using a schema |
| firecrawl_search | Search the web | Find relevant pages by query |

Tool Parameters

firecrawl_scrape:

  • url (required) — The URL to scrape
  • formats — Output formats: markdown, html, links, screenshot
  • waitFor — Wait time for JS rendering (ms)
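
As an illustration, a scrape call’s argument object might look like this (the URL and values are placeholders, not a real target):

```json
{
  "url": "https://example.com/pricing",
  "formats": ["markdown", "links"],
  "waitFor": 2000
}
```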

firecrawl_crawl:

  • url (required) — Starting URL
  • limit — Maximum pages to crawl
  • maxDepth — How deep to follow links
  • includePaths / excludePaths — URL pattern filters
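
A crawl request that indexes up to 50 documentation pages might be shaped like this (the URL, paths, and limits are illustrative, and the exact path-pattern syntax may vary by Firecrawl version):

```json
{
  "url": "https://docs.example.com",
  "limit": 50,
  "maxDepth": 2,
  "includePaths": ["/docs/*"],
  "excludePaths": ["/blog/*"]
}
```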

firecrawl_map:

  • url (required) — Site to map
  • search — Filter URLs by keyword
  • limit — Maximum URLs to return
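
For example, mapping a hypothetical docs site for authentication-related pages could use arguments like:

```json
{
  "url": "https://docs.example.com",
  "search": "authentication",
  "limit": 100
}
```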

firecrawl_extract:

  • url (required) — Page to extract from
  • schema — JSON schema for extraction
  • prompt — Natural language extraction instruction
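
A sketch of an extract call that pulls names and roles from a hypothetical team page; the schema follows standard JSON Schema conventions, and every field name here is an assumption for illustration:

```json
{
  "url": "https://example.com/team",
  "prompt": "Extract each team member's name and role",
  "schema": {
    "type": "object",
    "properties": {
      "members": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "role": { "type": "string" }
          }
        }
      }
    }
  }
}
```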

Usage Examples

Research a Tool’s Current Features

Prompt: “What features does Vercel’s v0 offer right now? Check their website.”

The AI will:

  1. Use firecrawl_scrape to fetch https://v0.dev
  2. Read the markdown content
  3. Summarize the current features with accurate, live information

Compare Pricing Pages

Prompt: “Compare the pricing of Supabase, PlanetScale, and Neon. Get their actual current prices.”

The AI will:

  1. Use firecrawl_scrape on each pricing page
  2. Extract plan names, prices, and features
  3. Build a comparison table with real-time data

Research API Documentation

Prompt: “I need to integrate with Stripe’s payment links API. Get me the relevant documentation.”

The AI will:

  1. Use firecrawl_map to discover Stripe’s docs structure
  2. Use firecrawl_scrape on the relevant pages
  3. Provide accurate, current API documentation

Extract Structured Data

Prompt: “Get all the Y Combinator W24 batch companies with their descriptions and categories.”

The AI will:

  1. Use firecrawl_extract with a schema defining company fields
  2. Return structured JSON data
  3. Format the results as requested

Crawl Documentation for Context

Prompt: “Crawl the FastAPI documentation so you understand the latest features, then help me build an API.”

The AI will:

  1. Use firecrawl_crawl on the FastAPI docs
  2. Build context from the crawled pages
  3. Use that context for accurate code generation

Advanced Configuration

Self-Hosted Firecrawl

If you’re running self-hosted Firecrawl, point the MCP server to your instance:

{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-your-key",
        "FIRECRAWL_API_URL": "http://localhost:3002"
      }
    }
  }
}

This eliminates credit costs while keeping the same MCP interface.

Rate Limiting

The MCP server respects Firecrawl’s rate limits. For intensive workflows:

{
  "env": {
    "FIRECRAWL_API_KEY": "fc-your-key",
    "FIRECRAWL_RETRY_MAX": "3",
    "FIRECRAWL_RETRY_DELAY": "5000"
  }
}

Proxy Configuration

If your self-hosted Firecrawl uses proxies:

{
  "env": {
    "FIRECRAWL_API_KEY": "fc-your-key",
    "FIRECRAWL_API_URL": "http://localhost:3002",
    "PROXY_SERVER": "http://proxy:8080"
  }
}

Best Practices

1. Be Specific with Scrape Requests

Instead of: “Check the Stripe website”

Say: “Scrape https://stripe.com/pricing and extract all plan prices”

Specific URLs and clear extraction goals produce better results.

2. Use Map Before Crawl

When you need information from a large site but don’t know the exact URL:

“First map https://docs.example.com to find pages about authentication, then scrape those specific pages”

3. Monitor Credit Usage

Firecrawl’s free tier has 500 credits/month. Each MCP tool call uses credits:

  • scrape = 1 credit per URL
  • crawl = 1 credit per page discovered
  • map = 1 credit
  • extract = 1 credit per URL

4. Combine with Other MCP Servers

Firecrawl MCP works alongside other MCP servers:

  • Firecrawl for web data
  • PostgreSQL MCP for database access
  • GitHub MCP for repository data
  • File system MCP for local files

Troubleshooting

“Server not found” Error

  1. Verify Node.js is installed: node --version (need 18+)
  2. Test the server manually: npx firecrawl-mcp --help
  3. Check the config file path is correct
  4. Restart the MCP client

“Authentication failed” Error

  1. Verify your API key at firecrawl.dev/dashboard
  2. Check for typos in the config file
  3. Ensure the key starts with the fc- prefix

Server Crashes

Check logs:

  • Claude Desktop: ~/Library/Logs/Claude/mcp-server-firecrawl.log
  • Cursor: Check the Output panel → MCP Servers

Slow Responses

  • Firecrawl needs time to render JavaScript — complex pages take 3-10 seconds
  • Large crawls can take minutes; consider reducing limit and maxDepth
  • Self-hosted Firecrawl may be faster for your use case

Firecrawl MCP vs Other Web MCP Servers

| Feature | Firecrawl MCP | Browserbase MCP | Jina MCP | Custom HTTP MCP |
| --- | --- | --- | --- | --- |
| Markdown output | Excellent | Good | Excellent | Raw HTML |
| JS rendering | Yes | Yes | Yes | No |
| Structured extraction | Yes (LLM) | No | No | No |
| Site crawling | Yes | No | No | No |
| URL discovery | Yes (Map) | No | No | No |
| Anti-bot | Advanced | Advanced | Moderate | None |
| Self-host option | Yes | No | No | Yes |
| Cost | Credit-based | Subscription | Free tier | Free |

Frequently Asked Questions

Does the Firecrawl MCP server use my API credits?

Yes. Every tool call made through MCP uses Firecrawl credits just like a direct API call. The free tier includes 500 credits/month. Monitor your usage at the Firecrawl dashboard.

Can I use the MCP server without an internet connection?

No — the MCP server needs to reach the Firecrawl API (or your self-hosted instance) to function. For offline use, consider scraping data in advance and storing it locally.

Is the MCP server secure?

The MCP server runs locally on your machine and communicates with Firecrawl’s API over HTTPS. Your API key is stored in your local configuration file. No data is stored by the MCP server itself.

Can I use Firecrawl MCP in team settings?

Yes. Each team member configures their own MCP client with their own Firecrawl API key. For shared API keys, use self-hosted Firecrawl to avoid credit conflicts.

Which AI assistant works best with Firecrawl MCP?

Claude Desktop and Cursor both work well. Claude tends to be more strategic about when to use the scraping tools, while Cursor integrates the scraped data well into code generation. Both support the full range of Firecrawl MCP tools.

Conclusion

The Firecrawl MCP server transforms your AI assistant from a static knowledge base into a dynamic research tool. By giving Claude or Cursor the ability to scrape any website in real time, you eliminate the gap between the AI’s training data cutoff and the current state of the web.

Setup takes under 5 minutes, and the free tier is generous enough for daily use. Start with the basic scrape tool, then explore crawling and extraction as you discover more use cases.

For more MCP servers to pair with Firecrawl, see our best MCP servers guide. For Firecrawl fundamentals, check our complete Firecrawl guide.

