What You’ll Learn
This guide shows you how to integrate Firecrawl with Junis using the Firecrawl MCP Server. Your agents will be able to:
- Scrape single web pages or batch-process multiple URLs
- Crawl entire websites recursively
- Search the web and extract content from results
- Extract structured data using JSON schemas
- Map website structures and discover all pages
Prerequisites:
- Firecrawl account and API key (get one at https://www.firecrawl.dev/app)
- Admin role in Junis (for organization-level setup)
- Basic understanding of web scraping concepts
Quick Setup (5 Minutes)
Step 1: Get Firecrawl API Key
- Visit the Firecrawl Dashboard
- Sign up or log in
- Click “Create API Key”
- Copy your API key (it starts with `fc-`)
Step 2: Add Firecrawl Platform to Junis
Navigate to Team > MCP Skills in Junis and find the Firecrawl card. If Firecrawl is already configured (globe icon 🌍), skip to Step 3. Otherwise, click “Connect” and fill in:
- Platform Name: `Firecrawl`
- MCP Server URL: `https://api.firecrawl.dev/mcp/`
- Transport Type: Streamable HTTP
Step 3: Add Your Credentials
Click “Add Auth” on the Firecrawl card and paste your API key. Click “Test Connection” to verify it works.
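If “Test Connection” fails and you want to rule out Junis itself, you can verify the key directly against the Firecrawl API. A minimal sketch, assuming the `/v1/scrape` endpoint and bearer-token auth (check the Firecrawl API docs); the request is built but not sent here:

```python
import urllib.request

API_KEY = "fc-your-api-key"  # replace with your real key

# Build a minimal scrape request with the key in the Authorization header.
req = urllib.request.Request(
    "https://api.firecrawl.dev/v1/scrape",
    data=b'{"url": "https://example.com"}',
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
```

A 401 response here means the key itself is invalid; any other result points at the Junis-side configuration instead.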
Step 4: Enable for Your Agents
Go to Admin > Agents, edit the agent you want to connect, and check “Firecrawl” in the MCP Platforms section. Save, then test by asking: “Scrape the homepage of https://example.com”
Available Tools
Firecrawl MCP provides eight web research tools:
Content Extraction
firecrawl_scrape
Single Page Scraping: extract content from a single URL in markdown or HTML format. Example: “Scrape the content of https://blog.example.com/post”
firecrawl_batch_scrape
Batch Scraping: scrape multiple URLs simultaneously. Example: “Scrape these 5 product pages: URL1, URL2, …”
Site Discovery
firecrawl_map
Website Mapping: discover all indexed URLs on a website. Example: “Map all pages on example.com”
firecrawl_search
Web Search + Extract: search the web and extract content from the top results. Example: “Search for ‘AI trends 2024’ and summarize the top 5 results”
Advanced Operations
firecrawl_crawl
Recursive Crawling: crawl an entire website, following links to a configurable depth. Example: “Crawl all product pages on example.com”
firecrawl_extract
Structured Data Extraction: extract data using JSON schemas for consistent formatting. Example: “Extract price, stock, and reviews from this product page”
Status Monitoring
firecrawl_check_batch_status
Batch Status: check the progress of batch scraping jobs.
firecrawl_check_crawl_status
Crawl Status: monitor the progress of crawling operations.
Common Workflows
1. Single Page Analysis
Use Case: Analyze a blog post or article
Workflow: scrape the page with firecrawl_scrape, then ask the agent to summarize or analyze the returned markdown.
2. Competitor Research
Use Case: Analyze multiple competitor websites
Workflow: batch-scrape the competitor URLs with firecrawl_batch_scrape, then compare the results side by side.
3. Content Discovery
Use Case: Find all blog posts on a website
Workflow: map the site with firecrawl_map, then filter the returned URLs to the blog section.
4. Structured Data Collection
Use Case: Collect product information from e-commerce sites
Workflow: define a JSON schema and run firecrawl_extract across the product URLs.
Tool Parameters Explained
firecrawl_scrape
| Parameter | Type | Description | Example |
|---|---|---|---|
| `url` | string | Required. URL to scrape | `https://example.com` |
| `formats` | array | Output format(s) | `["markdown", "html"]` |
| `onlyMainContent` | boolean | Extract only main content | `true` |
| `includeTags` | array | HTML tags to include | `["article", "main"]` |
| `excludeTags` | array | HTML tags to exclude | `["nav", "footer"]` |
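Assembled into a tool-call payload, the parameters above look like this. A sketch with illustrative values; the `validate` helper is not part of Firecrawl, just a cheap pre-flight check:

```python
# firecrawl_scrape arguments built from the parameter table.
scrape_args = {
    "url": "https://example.com",      # required
    "formats": ["markdown"],           # ask for markdown output only
    "onlyMainContent": True,           # drop nav/footer boilerplate
    "includeTags": ["article", "main"],
    "excludeTags": ["nav", "footer"],
}

def validate(args: dict) -> dict:
    """Fail fast before spending a credit on a malformed call (hypothetical helper)."""
    if "url" not in args:
        raise ValueError("url is required")
    if not args["url"].startswith(("http://", "https://")):
        raise ValueError("url must be an absolute http(s) URL")
    return args

validate(scrape_args)
```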
firecrawl_batch_scrape
| Parameter | Type | Description | Example |
|---|---|---|---|
| `urls` | array | Required. List of URLs | `["url1", "url2", "url3"]` |
| `options` | object | Scraping options | Same as firecrawl_scrape |
firecrawl_search
| Parameter | Type | Description | Example |
|---|---|---|---|
| `query` | string | Required. Search query | `"AI trends 2024"` |
| `limit` | number | Max results to return | `5` |
| `lang` | string | Language code | `"en"` or `"ko"` |
| `country` | string | Country code | `"US"` or `"KR"` |
firecrawl_crawl
| Parameter | Type | Description | Example |
|---|---|---|---|
| `url` | string | Required. Starting URL | `https://example.com` |
| `maxDepth` | number | Max crawl depth | `3` |
| `limit` | number | Max pages to crawl | `100` |
| `allowExternalLinks` | boolean | Follow external links | `false` |
| `includePaths` | array | URL patterns to include | `["/blog/*"]` |
| `excludePaths` | array | URL patterns to exclude | `["/admin/*"]` |
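You can preview what `includePaths`/`excludePaths` will keep before launching a crawl. A local sketch assuming simple shell-style glob semantics (the server's exact matching rules may differ):

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def crawl_allows(url: str,
                 include=("/blog/*",),
                 exclude=("/admin/*",)) -> bool:
    """Approximate includePaths/excludePaths filtering with glob patterns."""
    path = urlparse(url).path
    if any(fnmatch(path, pat) for pat in exclude):
        return False  # exclusions win
    return any(fnmatch(path, pat) for pat in include)

crawl_allows("https://example.com/blog/post-1")   # kept: matches /blog/*
crawl_allows("https://example.com/admin/users")   # dropped: matches /admin/*
```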
firecrawl_extract
| Parameter | Type | Description | Example |
|---|---|---|---|
| `urls` | array | Required. URLs to extract from | `["url1", "url2"]` |
| `prompt` | string | Extraction instructions | `"Extract product details"` |
| `schema` | object | JSON Schema for data | `{"name": "string", "price": "number"}` |
| `enableWebSearch` | boolean | Use web search | `false` |
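The `{"name": "string"}` shorthand in the table expands to a full JSON Schema in practice. A hypothetical schema for the product-page example above (field names are illustrative):

```python
# JSON Schema describing the structured data firecrawl_extract should return.
product_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
        "inStock": {"type": "boolean"},
        "reviews": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "rating": {"type": "number"},
                    "text": {"type": "string"},
                },
            },
        },
    },
    "required": ["name", "price"],
}

extract_args = {
    "urls": ["https://shop.example.com/product/123"],  # illustrative URL
    "prompt": "Extract product details",
    "schema": product_schema,
}
```

Marking only the fields you truly need as `required` gives the extractor room to omit data that genuinely isn't on the page.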
Example Use Cases
Web Research Agent
Agent Prompt: “Search the web for recent articles on a given topic, scrape the top results, and produce a sourced summary.”
Content Monitoring Agent
Agent Prompt: “Scrape a fixed set of pages and report any content changes since the last check.”
E-commerce Data Collector
Agent Prompt: “Extract product name, price, and stock status from these product pages using a JSON schema.”
Troubleshooting
Error: API Key Invalid
Symptom: Connection fails with an authentication error
Cause: Invalid or expired Firecrawl API key
Solution:
- Check your API key at https://www.firecrawl.dev/app/api-keys
- Generate a new key if needed
- Update credentials in Junis (Team > MCP Skills > Firecrawl)
Error: Credit Limit Exceeded
Symptom: Scraping operations fail after many requests
Cause: Firecrawl account has run out of credits
Solution:
- Check your usage at https://www.firecrawl.dev/app
- Purchase additional credits or upgrade plan
- Implement rate limiting in your agent logic
Scraping Takes Too Long
Symptom: firecrawl_crawl or firecrawl_batch_scrape times out
Cause: Large websites or many URLs
Solution:
- Reduce the `maxDepth` or `limit` parameters
- Use `includePaths` to filter specific sections
- Check status periodically with `firecrawl_check_crawl_status`
- Break large jobs into smaller batches
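Breaking a large job into smaller batches can be as simple as chunking the URL list before each `firecrawl_batch_scrape` call. A sketch (the batch size of 25 is an arbitrary choice, not a Firecrawl limit):

```python
def batches(urls, size=25):
    """Yield successive fixed-size chunks of a URL list."""
    for i in range(0, len(urls), size):
        yield urls[i:i + size]

# 60 URLs split into jobs of 25, 25, and 10.
all_urls = [f"https://example.com/p/{n}" for n in range(60)]
jobs = list(batches(all_urls, size=25))
```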
Extracted Data is Incomplete
Symptom: firecrawl_extract misses some fields
Cause: JSON schema doesn’t match the page structure
Solution:
- Test with `firecrawl_scrape` first to see the raw content
- Refine your JSON schema based on the actual page structure
- Use a more descriptive `prompt` to guide extraction
- Try extracting fewer fields per request
Tools Not Loading
Symptom: Agent connected but Firecrawl tools don’t appear
Solution:
- Verify connection: Team > MCP Skills > Firecrawl > Test Connection
- Check agent configuration: Admin > Agents > [Your Agent] > MCP Platforms
- Restart agent (edit and save without changes)
- Check logs: Admin > Dashboard > Recent Activity
Performance Tips
✅ Optimization Best Practices:
- Use `onlyMainContent: true` to reduce noise and speed up scraping
- Filter URLs with `includePaths` and `excludePaths` before crawling
- Batch-process multiple URLs instead of sequential single scrapes
- Cache scraped content to avoid redundant API calls
- Set a reasonable `limit` and `maxDepth` for crawling operations
- Use `firecrawl_map` first to plan your scraping strategy
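The caching tip takes only a few lines in the agent's tool layer. A minimal in-memory sketch; `fake_scrape` is a stand-in for the real `firecrawl_scrape` call, and in production you would add expiry:

```python
cache: dict[str, str] = {}

def cached_scrape(url: str, scrape) -> str:
    """Call the scraper only on a cache miss, so repeat URLs cost no credits."""
    if url not in cache:
        cache[url] = scrape(url)
    return cache[url]

calls = []
def fake_scrape(url):               # stand-in for the real MCP tool call
    calls.append(url)
    return f"content of {url}"

cached_scrape("https://example.com", fake_scrape)
cached_scrape("https://example.com", fake_scrape)   # served from cache
```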
Rate Limits & Costs
Firecrawl Pricing Tiers
| Tier | Credits/Month | Best For |
|---|---|---|
| Free | 500 credits | Testing and small projects |
| Starter | 10,000 credits | Regular scraping tasks |
| Growth | 50,000 credits | Medium-scale operations |
| Enterprise | Custom | Large-scale data collection |
Credit Usage
- Scrape: 1 credit per page
- Batch Scrape: 1 credit per page
- Map: 5 credits per site
- Search: 2 credits per query
- Crawl: 1 credit per page discovered
- Extract: 2 credits per page
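A back-of-envelope budget check using the per-operation costs listed above can keep an agent workflow inside its tier. A sketch:

```python
# Credits per unit, taken from the Credit Usage list above.
COST = {"scrape": 1, "batch_scrape": 1, "map": 5,
        "search": 2, "crawl": 1, "extract": 2}

def estimate(ops: dict) -> int:
    """ops maps operation name -> count of pages/queries/sites."""
    return sum(COST[op] * n for op, n in ops.items())

# Mapping one site, crawling 100 pages, extracting from 20:
total = estimate({"map": 1, "crawl": 100, "extract": 20})  # 5 + 100 + 40 = 145
```

At 145 credits per run, this workflow would exhaust the Free tier's 500 credits in three runs, so it belongs on Starter or above.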
Advanced Configuration
Custom Scraping Options
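As a sketch of what a custom options object might look like: most keys come from the firecrawl_scrape table above, but `waitFor` is an assumed option name for JS-heavy pages, so verify it against the Firecrawl API docs before relying on it:

```python
# Hypothetical custom scraping options for JS-heavy or cluttered pages.
custom_options = {
    "formats": ["markdown"],
    "onlyMainContent": True,
    "waitFor": 2000,  # ms to wait for rendered content (assumed option name)
    "excludeTags": ["nav", "footer", "aside"],
}
```

These options can be reused for batch scraping, since `firecrawl_batch_scrape` accepts the same options object.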
Crawling Strategy
For large websites, use a phased approach: Phase 1 is discovery with firecrawl_map, followed by targeted scraping of only the URLs you actually need.
What’s Next?
Notion MCP
Store scraped data in Notion databases
PostgreSQL MCP
Store structured data in databases
Advanced Workflows
Combine multiple MCP platforms
MCP Overview
Back to MCP Integration overview
Additional Resources
- Firecrawl Dashboard: https://www.firecrawl.dev/app
- API Documentation: https://docs.firecrawl.dev/
- MCP Server GitHub: https://github.com/firecrawl/firecrawl-mcp-server
- MCP Protocol: https://modelcontextprotocol.io/
Pro Tip: Combine Firecrawl with Notion MCP to automatically save research findings, or with Slack MCP to get real-time alerts when monitored websites change.
