       ·    ˚    ✦    ███████╗███████╗ ███████╗ ██████╗ █████╗ ██████╗ ███████╗    ✦    ˚    ·
    ˚    ✦    ·    ˚  ██╔════╝╚══███╔╝ ██╔════╝██╔════╝██╔══██╗██╔══██╗██╔════╝  ˚    ·    ✦    ˚
  ·    ˚    ✦    ·    █████╗    ███╔╝  ███████╗██║     ███████║██████╔╝█████╗    ·    ✦    ˚    ·
    ✦    ·    ˚    ·  ██╔══╝   ███╔╝   ╚════██║██║     ██╔══██║██╔═══╝ ██╔══╝  ·    ˚    ·    ✦
  ˚    ·    ✦    ·    ███████╗███████╗ ███████║╚██████╗██║  ██║██║     ███████╗    ·    ✦    ·    ˚
    ·    ✦    ˚    ·  ╚══════╝╚══════╝ ╚══════╝ ╚═════╝╚═╝  ╚═╝╚═╝     ╚══════╝  ·    ˚    ✦    ·
                ✦    ˚    ·    .         .    ·    ˚    ✦    ·    ˚    ✦    .
                 ·    ˚    ✦    ·    ✦    ˚    ·    ✦    ˚    ·    ˚    ✦
   ·    ˚    ✦    ██╗   ██╗███████╗███╗   ██╗████████╗██╗   ██╗██████╗ ███████╗███████╗    ✦    ˚    ·
 ✦    ·    ˚    · ██║   ██║██╔════╝████╗  ██║╚══██╔══╝██║   ██║██╔══██╗██╔════╝██╔════╝ ·    ˚    ·    ✦
   ˚    ✦    ·    ██║   ██║█████╗  ██╔██╗ ██║   ██║   ██║   ██║██████╔╝█████╗  ███████╗    ·    ✦    ˚
 ·    ˚    ·    ✦ ╚██╗ ██╔╝██╔══╝  ██║╚██╗██║   ██║   ██║   ██║██╔══██╗██╔══╝  ╚════██║ ✦    ·    ˚    ·
   ✦    ·    ˚     ╚████╔╝ ███████╗██║ ╚████║   ██║   ╚██████╔╝██║  ██║███████╗███████║     ˚    ·    ✦
 ˚    ·    ✦    ·   ╚═══╝  ╚══════╝╚═╝  ╚═══╝   ╚═╝    ╚═════╝ ╚═╝  ╚═╝╚══════╝╚══════╝   ·    ✦    ·    ˚

ez-deep-research MCP

MCP server for deep research using Google's Gemini API, Vertex AI, or OpenRouter, with real-time web search grounding. Features iterative research, automatic source quality scoring, and PDF export.

Features

  • Google Search Grounding: Real-time web search with automatically cited sources
  • Iterative Research: Multi-pass research with intelligent follow-up questions
  • Tranco-Based Quality Scoring: Automatic domain authority scoring using the Tranco top 1M list
  • Recency Weighting: Recent sources boosted, stale content penalized
  • Multiple Backends: Gemini API (free tier), Vertex AI (enterprise), or OpenRouter (any model)
  • PDF Export: Professional reports with proper formatting
  • Cost Tracking: Per-request token tracking with USD cost calculation
  • MCP Protocol: Works with Claude Code, Cursor, and any MCP client

Quick Start

1. Clone and Install

git clone https://github.com/ezoosk/ez-deep-research-mcp.git
cd ez-deep-research-mcp
npm install

2. Configure

cp .env.example .env.local

Edit .env.local with your API key:

# Easiest option - free tier available
BACKEND=gemini
GEMINI_API_KEY=your-key-from-aistudio.google.com

3. Build

npm run build

4. Add to Claude Code

Add to your ~/.claude/mcp.json:

{
  "mcpServers": {
    "ez-deep-research": {
      "command": "node",
      "args": ["--env-file=.env.local", "/path/to/ez-deep-research-mcp/dist/index.js"]
    }
  }
}
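
Optionally, you can confirm the built server responds over stdio before pointing a client at it. The sketch below uses the official @modelcontextprotocol/sdk TypeScript client to spawn the server and list its tools; the paths are placeholders and this script is not part of the repo.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built server the same way Claude Code would
const transport = new StdioClientTransport({
  command: "node",
  args: [
    "--env-file=/path/to/ez-deep-research-mcp/.env.local",
    "/path/to/ez-deep-research-mcp/dist/index.js",
  ],
});

const client = new Client({ name: "smoke-test", version: "0.0.1" });
await client.connect(transport);

// Should print the tool names documented under "MCP Tools" below
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();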

Backend Options

Gemini API (Recommended)

Best for: Most users, free tier available, includes Google Search grounding

BACKEND=gemini
GEMINI_API_KEY=your-key
GEMINI_MODEL=gemini-2.5-flash

Get your key at: https://aistudio.google.com/apikey

Pros:

  • Free tier (limited requests/day)
  • Google Search grounding built-in
  • High-quality cited sources

Vertex AI

Best for: Enterprise, high-volume usage, Google Cloud users

BACKEND=vertex
VERTEX_PROJECT_ID=your-project-id
VERTEX_LOCATION=us-central1

Also set GOOGLE_APPLICATION_CREDENTIALS to your service account JSON path.

Pros:

  • Higher rate limits
  • Google Search grounding built-in

OpenRouter

Best for: Using different models (GPT-4, Claude, Llama, Perplexity, etc.)

BACKEND=openrouter
OPENROUTER_API_KEY=your-key
OPENROUTER_MODEL=google/gemini-2.5-flash

Get your key at: https://openrouter.ai/keys

Smart Grounding by Model:

  • Gemini models: Native Google Search tool
  • Perplexity models: Built-in web search
  • All other models: :online suffix (Exa.ai web search)
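
As a rough illustration of the routing described in the list above (not the server's actual code), per-model grounding selection might look like this:

// Hypothetical sketch of per-model grounding selection on OpenRouter
type Grounding =
  | { kind: "google-search-tool" }              // Gemini: native Google Search tool
  | { kind: "builtin-search" }                  // Perplexity: built-in web search
  | { kind: "online-suffix"; model: string };   // everything else: Exa.ai via :online

function pickGrounding(model: string): Grounding {
  if (model.includes("gemini")) return { kind: "google-search-tool" };
  if (model.startsWith("perplexity/")) return { kind: "builtin-search" };
  return { kind: "online-suffix", model: `${model}:online` };
}

// pickGrounding("openai/gpt-4o") -> { kind: "online-suffix", model: "openai/gpt-4o:online" }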

Default Model & Per-Tool Override

Your configured BACKEND and model (e.g., GEMINI_MODEL) apply to all tools by default. You don't need to specify them on each call.

To use a different backend for a specific tool call, pass the backend parameter:

// Use OpenRouter just for this one call
{ query: "...", backend: "openrouter" }

This lets you mix backends: for example, use Vertex AI for most research but switch to OpenRouter for specific queries.

MCP Tools

quick_research

Single-pass research for quick answers. Saves to ./dev-docs/research/quick/.

{
  query: string,        // Required: The research query
  goal?: string,        // Research goal/brief
  output_dir?: string,  // Custom output directory
  skip_save?: boolean,  // Skip file saving (default: false)
  backend?: string      // Override default backend
}

research

Deep research with iterative follow-up questions. Saves to ./dev-docs/research/standard/.

{
  query: string,        // Required: The research query
  depth?: 1-5,          // How deep to go (default: 2)
  breadth?: 1-5,        // Follow-up questions per pass (default: 2)
  goal?: string,        // Research goal/brief
  existingLearnings?: string[],  // Previous learnings to build on
  output_dir?: string,  // Custom output directory
  skip_save?: boolean,  // Skip file saving (default: false)
  backend?: string      // Override default backend
}

deep_research

Full pipeline: research -> synthesize -> save -> PDF. Saves to ./dev-docs/research/deep/.

{
  query: string,        // Required: The research query
  depth?: 1-5,          // Research depth (default: 3)
  breadth?: 1-5,        // Research breadth (default: 2)
  goal?: string,        // Research goal
  output_dir?: string,  // Custom output directory
  output_pdf?: string,  // Custom PDF path (auto-generated by default)
  skip_save?: boolean,  // Skip file saving (default: false)
  backend?: string      // Override default backend
}
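
For example, a script that drives the full pipeline through the MCP TypeScript SDK might look like this (paths and the query are placeholders; any MCP client can make the same call):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: [
    "--env-file=/path/to/ez-deep-research-mcp/.env.local",
    "/path/to/ez-deep-research-mcp/dist/index.js",
  ],
});
const client = new Client({ name: "research-script", version: "0.0.1" });
await client.connect(transport);

// Run the full research -> synthesize -> save -> PDF pipeline
const result = await client.callTool({
  name: "deep_research",
  arguments: {
    query: "Email deliverability best practices",
    depth: 3,
    breadth: 2,
    goal: "Produce a report for a marketing engineering team",
  },
});
console.log(result.content); // the tool's response content

await client.close();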

synthesize

Second-stage processing: cross-references findings and produces a structured report.

{
  query: string,        // Original research query
  raw_research: string, // Raw research output to synthesize
  sources: Source[],    // Source list from research
  goal?: string,        // Research goal for context
  backend?: string      // Override default backend
}

export_pdf

Export markdown to PDF.

{
  markdown: string,     // Markdown content
  output_path: string,  // PDF output path
  title?: string        // Document title
}

Default Output

All research tools automatically save results to ./dev-docs/research/:

dev-docs/research/
├── standard/        # research tool output
│   └── 2026-01-10-email-marketing-platforms.md
├── quick/           # quick_research tool output
│   └── 2026-01-10-what-is-dkim.md
└── deep/            # deep_research tool output (with PDF)
    ├── 2026-01-10-email-campaign-analysis.md
    └── 2026-01-10-email-campaign-analysis.pdf

Filename format: {YYYY-MM-DD}-{sanitized-query}.md

To skip file saving, pass skip_save: true to any tool.
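
One plausible way to produce the filename format above (the exact sanitization rules are an assumption, not taken from the source):

// Hypothetical filename builder for {YYYY-MM-DD}-{sanitized-query}.md
function researchFilename(query: string, date: Date = new Date()): string {
  const day = date.toISOString().slice(0, 10);   // YYYY-MM-DD
  const slug = query
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")                 // collapse non-alphanumerics to dashes
    .replace(/^-+|-+$/g, "")                     // trim leading/trailing dashes
    .slice(0, 60);
  return `${day}-${slug}.md`;
}

// researchFilename("What is DKIM?") -> "2026-01-10-what-is-dkim.md" (on that date)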

URL Quality Scoring

Sources are automatically scored 0.0-1.0 using the Tranco top 1M domain list plus recency weighting.

Domain Scoring (Tranco-Based)

| Tier | Tranco Rank | Score | Examples |
|------|-------------|-------|----------|
| Authoritative | Top 10K | 0.9-1.0 | Wikipedia, GitHub, .edu, .gov, arxiv |
| Reputable | Top 100K | 0.7-0.89 | Stack Overflow, Ars Technica, HN |
| Neutral | Top 1M | 0.5-0.69 | General blogs, YouTube |
| Questionable | Outside 1M | 0.2-0.49 | SEO farms, content mills |
| Blocked | N/A | 0.0 | Pinterest, Facebook (filtered out) |

Recency Multiplier

| Age | Multiplier |
|-----|------------|
| < 6 months | +10% boost |
| 6-12 months | +5% boost |
| 12-24 months | Neutral |
| 24-36 months | -5% penalty |
| 36-48 months | -10% penalty |
| > 48 months | -15% penalty |

Final Score = min(1.0, Tranco_Score × Recency_Multiplier)

The Tranco list auto-refreshes every 24 hours from tranco-list.eu.
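
Put together, the rules in the two tables above amount to something like the following (the tier scores within each range and the function names are illustrative, not the server's actual implementation):

// Map a Tranco rank to a tier score (values chosen within the documented ranges)
function trancoScore(rank: number | null): number {
  if (rank === null) return 0.3;      // outside the top 1M: questionable
  if (rank <= 10_000) return 0.95;    // authoritative
  if (rank <= 100_000) return 0.8;    // reputable
  return 0.6;                         // neutral
}

// Recency multiplier from the table above
function recencyMultiplier(ageMonths: number): number {
  if (ageMonths < 6) return 1.10;
  if (ageMonths < 12) return 1.05;
  if (ageMonths < 24) return 1.00;
  if (ageMonths < 36) return 0.95;
  if (ageMonths < 48) return 0.90;
  return 0.85;
}

const finalScore = (rank: number | null, ageMonths: number): number =>
  Math.min(1.0, trancoScore(rank) * recencyMultiplier(ageMonths));

// A top-10K domain published 3 months ago: min(1.0, 0.95 * 1.10) = 1.0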

Synthesis & Confidence

The synthesizer weights findings by source quality tier:

| Quality Tier | Weight | Confidence Rule |
|--------------|--------|-----------------|
| Authoritative (85%+) | 100% | HIGH if 2+ sources agree |
| Reputable (65-84%) | 70% | MEDIUM with corroboration |
| Neutral (40-64%) | 40% | LOW without corroboration |
| Questionable (<40%) | 10% | Requires verification |
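
An illustrative reading of that table in code (thresholds follow the percentages above; the names and exact corroboration logic are assumptions):

type Confidence = "HIGH" | "MEDIUM" | "LOW" | "NEEDS VERIFICATION";

// Weight applied to a finding based on its source quality score (0-1)
function findingWeight(quality: number): number {
  if (quality >= 0.85) return 1.0;   // authoritative
  if (quality >= 0.65) return 0.7;   // reputable
  if (quality >= 0.40) return 0.4;   // neutral
  return 0.1;                        // questionable
}

// Confidence label for a finding, given the quality scores of its sources
function confidenceFor(sourceQualities: number[]): Confidence {
  const authoritative = sourceQualities.filter((q) => q >= 0.85).length;
  const reputable = sourceQualities.filter((q) => q >= 0.65).length;
  if (authoritative >= 2) return "HIGH";
  if (reputable >= 2) return "MEDIUM";
  if (sourceQualities.some((q) => q >= 0.40)) return "LOW";
  return "NEEDS VERIFICATION";
}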

Cost Tracking

Research reports include token usage and cost estimates (Jan 2026 pricing):

| Model | Input $/1M | Output $/1M |
|-------|------------|-------------|
| gemini-2.5-pro | $1.25 | $10.00 |
| gemini-2.5-flash | $0.075 | $0.30 |
| gemini-2.5-flash-lite | $0.01875 | $0.075 |
| perplexity/sonar-pro | $3.00 | $15.00 |
| perplexity/sonar | $1.00 | $1.00 |
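
The per-request cost math is straightforward; a sketch using the prices above (the lookup table and function are illustrative, not the server's code):

// USD per 1M tokens, from the table above (Jan 2026)
const PRICES: Record<string, { input: number; output: number }> = {
  "gemini-2.5-pro":        { input: 1.25,    output: 10.00 },
  "gemini-2.5-flash":      { input: 0.075,   output: 0.30 },
  "gemini-2.5-flash-lite": { input: 0.01875, output: 0.075 },
  "perplexity/sonar-pro":  { input: 3.00,    output: 15.00 },
  "perplexity/sonar":      { input: 1.00,    output: 1.00 },
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) return 0; // unknown model: no estimate
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

// estimateCostUSD("gemini-2.5-pro", 1_000_000, 100_000) ≈ 2.25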

Configuration Reference

| Variable | Description | Default |
|----------|-------------|---------|
| BACKEND | gemini, vertex, or openrouter | gemini |
| GEMINI_API_KEY | Gemini API key | Required for gemini |
| GEMINI_MODEL | Gemini model | gemini-2.5-flash |
| VERTEX_PROJECT_ID | GCP project ID | Required for vertex |
| VERTEX_LOCATION | Vertex AI region | us-central1 |
| OPENROUTER_API_KEY | OpenRouter API key | Required for openrouter |
| OPENROUTER_MODEL | OpenRouter model ID | google/gemini-2.5-flash |
| QUALITY_THRESHOLD | Min source quality (0-1) | 0.4 |
| LOG_LEVEL | debug, info, warn, error | info |
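
For reference, loading these variables with their defaults could look roughly like this (a sketch, not the server's actual config module):

// Hypothetical config loader mirroring the table above
const config = {
  backend: process.env.BACKEND ?? "gemini",
  geminiModel: process.env.GEMINI_MODEL ?? "gemini-2.5-flash",
  vertexLocation: process.env.VERTEX_LOCATION ?? "us-central1",
  openrouterModel: process.env.OPENROUTER_MODEL ?? "google/gemini-2.5-flash",
  qualityThreshold: Number(process.env.QUALITY_THRESHOLD ?? "0.4"),
  logLevel: process.env.LOG_LEVEL ?? "info",
};

// Each backend requires its own credential
if (config.backend === "gemini" && !process.env.GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is required when BACKEND=gemini");
}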

Comparison: Grounded vs Non-Grounded

| Feature | Gemini/Vertex | OpenRouter |
|---------|---------------|------------|
| Real-time web search | Yes | Model-dependent |
| Cited sources | Yes (automatic) | Varies |
| Source quality scoring | Yes | Yes |
| Model flexibility | Limited | Any model |
| Best for | Research with sources | Synthesis, analysis |

Example Output

# Research: Email deliverability best practices

*Generated: 2026-01-10T16:23:14.427Z*
*Sources: 45/52 verified | Quality threshold: 0.4+*
*Cost: $0.0023 (1,847 input + 892 output tokens)*

## Executive Summary
...

## Key Findings

### 1. SPF, DKIM, and DMARC are mandatory
...
**Sources:** [Google Docs](url), [RFC 7489](url)
**Confidence:** HIGH (2+ authoritative sources)

## Sources

### Authoritative (90%+)
- [Google Postmaster Tools](https://...) - 95%
- [RFC 7489](https://...) - 98%
...

License

MIT License - Copyright (c) 2026 Edward Zisk. See LICENSE for details.

Attribution Required: If you fork, sell, or build products using this software, please provide attribution to the original author.

Contributing

PRs welcome! Please test changes locally before submitting.

Credits

Created as a free tool for the community. If you find it useful, star the repo or listen to my music :)
