Firecrawl MCP Server: Complete Setup Guide
The Firecrawl MCP Server gives AI coding assistants like Claude and Cursor the ability to scrape and read any website in real time. Through the Model Context Protocol (MCP), your AI assistant can fetch live web data, extract structured information, and use it directly in conversations — without you leaving your IDE.
This guide covers everything from installation to advanced usage patterns for the Firecrawl MCP server.
What Is MCP?
The Model Context Protocol (MCP) is an open standard developed by Anthropic that allows AI models to interact with external tools and data sources. Think of it as a plugin system for AI assistants.
Instead of the AI being limited to its training data, MCP lets it call external tools — databases, APIs, file systems, and web scrapers — during a conversation.
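Concretely, an MCP server advertises each tool with a name, a human-readable description, and a JSON Schema for its inputs; the model reads these to decide when and how to call the tool. A minimal sketch of what such a definition looks like (the field names follow the MCP tool spec; the parameter details here are illustrative):

```python
# Sketch of an MCP tool definition. An MCP server advertises each tool as a
# name plus a description plus a JSON Schema ("inputSchema") for arguments;
# the AI assistant uses the description to decide when to call the tool.
scrape_tool = {
    "name": "firecrawl_scrape",
    "description": "Scrape a single URL and return its content as markdown.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "The URL to scrape"},
            "formats": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Output formats, e.g. ['markdown', 'html']",
            },
        },
        "required": ["url"],  # arguments the model must always supply
    },
}
```

The description field matters more than it looks: it is the only signal the model has for choosing this tool over another.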
How Firecrawl MCP Works
You ask Claude/Cursor a question → AI decides to use Firecrawl → MCP server calls Firecrawl API → Web data returned → AI uses data in response

The AI decides when to scrape a website based on your request. If you ask “What are the current pricing plans for Service X?”, the AI can use Firecrawl to fetch that page live and give you up-to-date information.
Why Use Firecrawl MCP?
| Benefit | Description |
|---|---|
| Live web data | AI can access current website content, not stale training data |
| In-context research | Research without leaving your IDE |
| Clean data | Firecrawl’s markdown output is optimized for LLM consumption |
| Structured extraction | Pull specific data fields from any page |
| Site crawling | Index entire documentation sites during a conversation |
| No manual copy-paste | AI fetches data directly instead of you copying URLs |
Installation
Prerequisites
- Node.js 18+ or Python 3.10+ (depending on whether you install the server via npm or pip)
- Firecrawl API key — get one at firecrawl.dev
- An MCP-compatible client — Claude Desktop, Cursor, or another MCP host
Install the Firecrawl MCP Server
Via npm (recommended for most setups):
npm install -g firecrawl-mcp

Via pip:

pip install firecrawl-mcp

Configuration for Different Clients
Claude Desktop
Edit your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "fc-your-api-key-here"
}
}
}
}

Restart Claude Desktop after saving.
Cursor
In Cursor, configure MCP servers through Settings:
- Open Cursor Settings
- Navigate to Features → MCP Servers
- Click Add Server
- Configure:
  - Name: firecrawl
  - Command: npx -y firecrawl-mcp
  - Environment: FIRECRAWL_API_KEY=fc-your-key
Or edit .cursor/mcp.json in your project:
{
"mcpServers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "fc-your-api-key-here"
}
}
}
}

Claude Code (CLI)

claude mcp add firecrawl -e FIRECRAWL_API_KEY=fc-your-key -- npx -y firecrawl-mcp

VS Code with Continue
In .continue/config.json:
{
"mcpServers": [
{
"name": "firecrawl",
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "fc-your-key"
}
}
]
}

Available Tools
The Firecrawl MCP server exposes several tools to the AI:
| Tool | Description | Use Case |
|---|---|---|
| firecrawl_scrape | Scrape a single URL | Get content from a specific page |
| firecrawl_crawl | Crawl multiple pages | Index a documentation site |
| firecrawl_map | Discover URLs on a site | Find all pages before targeted scraping |
| firecrawl_extract | Extract structured data | Pull specific fields using a schema |
| firecrawl_search | Search the web | Find relevant pages by query |
Tool Parameters
firecrawl_scrape:
- url (required) — The URL to scrape
- formats — Output formats: markdown, html, links, screenshot
- waitFor — Wait time for JS rendering (ms)
firecrawl_crawl:
- url (required) — Starting URL
- limit — Maximum pages to crawl
- maxDepth — How deep to follow links
- includePaths / excludePaths — URL pattern filters
firecrawl_map:
- url (required) — Site to map
- search — Filter URLs by keyword
- limit — Maximum URLs to return
firecrawl_extract:
- url (required) — Page to extract from
- schema — JSON schema for extraction
- prompt — Natural language extraction instruction
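To make the shapes concrete, here is a sketch of the argument payloads an assistant might pass for three of these tools. The parameter names mirror the lists above; the URLs and values are made-up examples:

```python
# Illustrative argument payloads for the Firecrawl MCP tools described above.
# URLs and values are invented; parameter names follow the lists in this guide.
scrape_args = {
    "url": "https://example.com/pricing",  # required
    "formats": ["markdown", "links"],      # desired output formats
    "waitFor": 2000,                       # ms to wait for JS rendering
}

crawl_args = {
    "url": "https://docs.example.com",     # starting URL
    "limit": 25,                           # cap total pages crawled
    "maxDepth": 2,                         # how deep to follow links
    "excludePaths": ["/blog/*"],           # skip matching URL patterns
}

extract_args = {
    "url": "https://example.com/team",
    "schema": {                            # JSON schema for the output shape
        "type": "object",
        "properties": {
            "names": {"type": "array", "items": {"type": "string"}}
        },
    },
}
```

In practice the AI builds these payloads itself from your prompt; being explicit about URLs and fields in the prompt is what steers it toward the right arguments.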
Usage Examples
Research a Tool’s Current Features
Prompt: “What features does Vercel’s v0 offer right now? Check their website.”
The AI will:
- Use firecrawl_scrape to fetch https://v0.dev
- Read the markdown content
- Summarize the current features with accurate, live information
Compare Pricing Pages
Prompt: “Compare the pricing of Supabase, PlanetScale, and Neon. Get their actual current prices.”
The AI will:
- Use firecrawl_scrape on each pricing page
- Extract plan names, prices, and features
- Build a comparison table with real-time data
Research API Documentation
Prompt: “I need to integrate with Stripe’s payment links API. Get me the relevant documentation.”
The AI will:
- Use firecrawl_map to discover Stripe’s docs structure
- Use firecrawl_scrape on the relevant pages
- Provide accurate, current API documentation
Extract Structured Data
Prompt: “Get all the Y Combinator W24 batch companies with their descriptions and categories.”
The AI will:
- Use firecrawl_extract with a schema defining company fields
- Return structured JSON data
- Format the results as requested
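For this example, the schema the assistant passes to firecrawl_extract could look like the sketch below. It is ordinary JSON Schema; the specific field names (name, description, category) are simply what this prompt asks for:

```python
# Illustrative JSON schema for the YC batch example above: a list of
# companies, each with a few string fields. Standard JSON Schema vocabulary.
company_schema = {
    "type": "object",
    "properties": {
        "companies": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "description": {"type": "string"},
                    "category": {"type": "string"},
                },
                "required": ["name"],
            },
        }
    },
}

# A result conforming to that schema would look like this (made-up data):
sample_result = {
    "companies": [
        {"name": "ExampleAI", "description": "Does X", "category": "DevTools"}
    ]
}
```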
Crawl Documentation for Context
Prompt: “Crawl the FastAPI documentation so you understand the latest features, then help me build an API.”
The AI will:
- Use firecrawl_crawl on the FastAPI docs
- Build context from the crawled pages
- Use that context for accurate code generation
Advanced Configuration
Self-Hosted Firecrawl
If you’re running self-hosted Firecrawl, point the MCP server to your instance:
{
"mcpServers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "fc-your-key",
"FIRECRAWL_API_URL": "http://localhost:3002"
}
}
}
}

This eliminates credit costs while keeping the same MCP interface.
Rate Limiting
The MCP server respects Firecrawl’s rate limits. For intensive workflows:
{
"env": {
"FIRECRAWL_API_KEY": "fc-your-key",
"FIRECRAWL_RETRY_MAX": "3",
"FIRECRAWL_RETRY_DELAY": "5000"
}
}

Proxy Configuration
If your self-hosted Firecrawl uses proxies:
{
"env": {
"FIRECRAWL_API_KEY": "fc-your-key",
"FIRECRAWL_API_URL": "http://localhost:3002",
"PROXY_SERVER": "http://proxy:8080"
}
}

Best Practices
1. Be Specific with Scrape Requests
Instead of: “Check the Stripe website”
Say: “Scrape https://stripe.com/pricing and extract all plan prices”
Specific URLs and clear extraction goals produce better results.
2. Use Map Before Crawl
When you need information from a large site but don’t know the exact URL:
“First map https://docs.example.com to find pages about authentication, then scrape those specific pages”
3. Monitor Credit Usage
Firecrawl’s free tier has 500 credits/month. Each MCP tool call uses credits:
- scrape = 1 credit per URL
- crawl = 1 credit per page discovered
- map = 1 credit
- extract = 1 credit per URL
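With those per-tool costs, you can sanity-check a workflow before running it. A back-of-the-envelope helper (credit values copied from the list above; verify against current Firecrawl pricing, since they can change):

```python
# Rough credit estimator based on the per-tool costs listed above.
# (Verify against Firecrawl's current pricing; these values can change.)
CREDIT_COST = {"scrape": 1, "crawl": 1, "map": 1, "extract": 1}

def estimate_credits(calls):
    """calls: list of (tool, unit_count) pairs, where unit_count is the
    number of URLs/pages the call touches (1 for a map call)."""
    return sum(CREDIT_COST[tool] * units for tool, units in calls)

# Example: map a docs site, crawl 30 pages, then extract from 3 pages.
workflow = [("map", 1), ("crawl", 30), ("extract", 3)]
print(estimate_credits(workflow))  # 34 credits of a 500/month free tier
```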
4. Combine with Other MCP Servers
Firecrawl MCP works alongside other MCP servers:
- Firecrawl for web data
- PostgreSQL MCP for database access
- GitHub MCP for repository data
- File system MCP for local files
Troubleshooting
“Server not found” Error
- Verify Node.js is installed: node --version (need 18+)
- Test the server manually: npx firecrawl-mcp --help
- Check the config file path is correct
- Restart the MCP client
“Authentication failed” Error
- Verify your API key at firecrawl.dev/dashboard
- Check for typos in the config file
- Ensure the key has the fc- prefix
Server Crashes
Check logs:
- Claude Desktop: ~/Library/Logs/Claude/mcp-server-firecrawl.log
- Cursor: check the Output panel → MCP Servers
Slow Responses
- Firecrawl needs time to render JavaScript — complex pages take 3-10 seconds
- Large crawls can take minutes; consider reducing limit and maxDepth
- Self-hosted Firecrawl may be faster for your use case
Firecrawl MCP vs Other Web MCP Servers
| Feature | Firecrawl MCP | Browserbase MCP | Jina MCP | Custom HTTP MCP |
|---|---|---|---|---|
| Markdown output | Excellent | Good | Excellent | Raw HTML |
| JS rendering | Yes | Yes | Yes | No |
| Structured extraction | Yes (LLM) | No | No | No |
| Site crawling | Yes | No | No | No |
| URL discovery | Yes (Map) | No | No | No |
| Anti-bot | Advanced | Advanced | Moderate | None |
| Self-host option | Yes | No | No | Yes |
| Cost | Credit-based | Subscription | Free tier | Free |
Frequently Asked Questions
Does the Firecrawl MCP server use my API credits?
Yes. Every tool call made through MCP uses Firecrawl credits just like a direct API call. The free tier includes 500 credits/month. Monitor your usage at the Firecrawl dashboard.
Can I use the MCP server without an internet connection?
No — the MCP server needs to reach the Firecrawl API (or your self-hosted instance) to function. For offline use, consider scraping data in advance and storing it locally.
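A simple way to prepare for offline work is to cache scraped markdown on disk while you are online. A minimal sketch: the fetch callable below stands in for whatever scraping client you use (it is not part of the MCP server), and the cache layout is invented for illustration:

```python
import hashlib
import json
from pathlib import Path

# Minimal local cache for scraped pages: while online, store each page's
# markdown on disk; later reads hit the cache without any network call.
# `fetch` is a stand-in for your real scraping client.
def get_page(url: str, fetch, cache_dir: str = ".scrape_cache") -> str:
    cache = Path(cache_dir)
    cache.mkdir(exist_ok=True)
    # Stable filename per URL, so the cache survives across runs.
    key = cache / (hashlib.sha256(url.encode()).hexdigest() + ".json")
    if key.exists():  # cached: works offline
        return json.loads(key.read_text())["markdown"]
    markdown = fetch(url)  # online path: scrape once, then store
    key.write_text(json.dumps({"url": url, "markdown": markdown}))
    return markdown
```

Subsequent calls for the same URL read from disk, so repeat lookups need no network access at all.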
Is the MCP server secure?
The MCP server runs locally on your machine and communicates with Firecrawl’s API over HTTPS. Your API key is stored in your local configuration file. No data is stored by the MCP server itself.
Can I use Firecrawl MCP in team settings?
Yes. Each team member configures their own MCP client with their own Firecrawl API key. For shared API keys, use self-hosted Firecrawl to avoid credit conflicts.
Which AI assistant works best with Firecrawl MCP?
Claude Desktop and Cursor both work well. Claude tends to be more strategic about when to use the scraping tools, while Cursor integrates the scraped data well into code generation. Both support the full range of Firecrawl MCP tools.
Conclusion
The Firecrawl MCP server transforms your AI assistant from a static knowledge base into a dynamic research tool. By giving Claude or Cursor the ability to scrape any website in real time, you eliminate the gap between the AI’s training data cutoff and the current state of the web.
Setup takes under 5 minutes, and the free tier is generous enough for daily use. Start with the basic scrape tool, then explore crawling and extraction as you discover more use cases.
For more MCP servers to pair with Firecrawl, see our best MCP servers guide. For Firecrawl fundamentals, check our complete Firecrawl guide.
Related Reading
- AI Web Scraper with Python: Build Your Own
- Best AI Web Scrapers 2026: Complete Comparison
- Agentic Browsers Explained: Browserbase, Browser Use, and Proxy Infrastructure
- Agentic Browsers Explained: The Future of AI + Proxies in 2026
- How AI Agents Use Proxies for Real-Time Web Data Collection in 2026
- Mobile Proxies for AI Data Collection: Web Scraping for Training Data