# MCP Transport Benchmark
A comprehensive benchmark tool for comparing MCP (Model Context Protocol) transport mechanisms: stdio vs Streamable HTTP.
## Why This Matters
MCP servers can use different transport mechanisms:
| Transport | How it works | Scaling |
|-----------|--------------|---------|
| stdio | Spawns a new subprocess per connection | 100 users = 100 processes |
| HTTP | A single server handles all connections | 100 users = 1 process |
This benchmark quantifies the difference in the following metrics (a minimal measurement sketch follows the list):
- Memory usage
- Execution time
- Process count
- Request latency
- Throughput
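The core measurement needs nothing more than `psutil` and a timer. A minimal sketch, assuming a hypothetical `launch_workers` callable that starts the load and returns the processes it spawned; this helper is not part of the package:

```python
import time

import psutil


def measure_run(launch_workers):
    """Time a run and sum resident memory across the spawned processes.

    `launch_workers` is a hypothetical callable: it starts the benchmark
    load and returns psutil.Process handles (one per user for stdio,
    a single server process for HTTP).
    """
    start = time.perf_counter()
    procs = launch_workers()
    peak_rss_mb = sum(p.memory_info().rss for p in procs if p.is_running()) / 1e6
    elapsed = time.perf_counter() - start
    return {
        "process_count": len(procs),
        "peak_memory_mb": round(peak_rss_mb, 1),
        "total_time_seconds": round(elapsed, 2),
    }
```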
## Quick Start
```bash
# Clone the repository
git clone https://github.com/rezapirighadim/mcp-transport-benchmark
cd mcp-transport-benchmark

# Install dependencies
pip install -e .

# Run a quick benchmark
mcp-benchmark --users 10,50,100

# Run a stress test (HTTP only, for safety)
mcp-benchmark --transport http --users 1000,5000,10000
```
## Installation

### From Source
```bash
git clone https://github.com/rezapirighadim/mcp-transport-benchmark
cd mcp-transport-benchmark
pip install -e .
```
### Dependencies
- Python 3.10+
- fastmcp >= 2.0.0
- psutil >= 5.9.0
- rich >= 13.0.0
- matplotlib >= 3.7.0
- click >= 8.0.0
## Usage

### Basic Usage
```bash
# Run with default settings
mcp-benchmark

# Specify user counts
mcp-benchmark --users 10,100,1000

# Test only the HTTP transport
mcp-benchmark --transport http --users 10000

# Generate all output formats
mcp-benchmark --format all
```
### CLI Options
```text
Usage: mcp-benchmark [OPTIONS]

Options:
  -c, --config PATH      Configuration file path
  -u, --users TEXT       Comma-separated user counts (e.g., '10,100,1000')
  -t, --transport [stdio|http|both]
                         Transport to test (default: both)
  -s, --scenario TEXT    Run specific scenario by name
  -o, --output-dir PATH  Results output directory (default: ./results)
  -f, --format [console|json|markdown|charts|all]
                         Output formats (default: console, json)
  --calls-per-user INT   Number of calls per user (default: 5)
  --timeout INT          Timeout per scenario in seconds (default: 300)
  --max-memory INT       Max memory in GB before skipping stdio (default: 32)
  -v, --verbose          Show verbose output during benchmark
  --dry-run              Show what would be tested without running
  --version              Show version
  --help                 Show this message
```
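If you want to run the benchmark from a script (for example in CI), the CLI can be driven with `subprocess`. A sketch using only the flags documented above; the output directory is an arbitrary choice:

```python
import subprocess

# Invoke the documented CLI; check=True fails the job if the run errors out.
subprocess.run(
    [
        "mcp-benchmark",
        "--users", "10,100",
        "--transport", "both",
        "--format", "json",
        "--output-dir", "./results/ci",
    ],
    check=True,
)
```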
### Configuration Files
Create custom configurations in YAML:
```yaml
# config/my_benchmark.yaml
benchmark:
  name: "My Custom Benchmark"

scenarios:
  - name: "scale_test"
    users: [100, 500, 1000, 5000]
    calls_per_user: 10

transports:
  stdio:
    enabled: true
    max_concurrent: 200
  http:
    enabled: true
    port: 8001

output:
  formats: [console, json, markdown, charts]
```
Run with the custom config:

```bash
mcp-benchmark --config config/my_benchmark.yaml
```
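Before committing to a long run, it can be worth sanity-checking a config file. A minimal standalone sketch, assuming PyYAML is installed; the `check_config` helper is not part of the package and only mirrors the keys from the example above:

```python
import yaml  # PyYAML, assumed to be installed


def check_config(path):
    """Sanity-check the top-level sections used in the example config."""
    with open(path) as fh:
        cfg = yaml.safe_load(fh)
    for section in ("benchmark", "scenarios", "transports", "output"):
        if section not in cfg:
            raise KeyError(f"missing top-level section {section!r} in {path}")
    for scenario in cfg["scenarios"]:
        if not scenario.get("users"):
            raise ValueError(f"scenario {scenario.get('name')!r} lists no users")
    return cfg


cfg = check_config("config/my_benchmark.yaml")
print(f"OK: {len(cfg['scenarios'])} scenario(s) defined")
```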
## Sample Output
```text
╔══════════════════════════════════════════════════════════════════╗
║                  MCP Transport Benchmark v1.0.0                   ║
╠══════════════════════════════════════════════════════════════════╣
║ Transports: stdio, streamable-http                                ║
║ User counts: 100, 1,000, 10,000                                   ║
╚══════════════════════════════════════════════════════════════════╝

┌─────────┬───────────────────┬───────────────────┬─────────────┐
│ Users   │ stdio             │ HTTP              │ Improvement │
├─────────┼───────────────────┼───────────────────┼─────────────┤
│ 100     │ 8.2 GB / 12.4s    │ 145 MB / 0.34s    │ 56× / 36×   │
│ 1,000   │ 41.2 GB / 142.7s  │ 156 MB / 2.1s     │ 264× / 68×  │
│ 10,000  │ SKIPPED (memory)  │ 312 MB / 18.7s    │ ∞           │
└─────────┴───────────────────┴───────────────────┴─────────────┘

Key Insights:
  • HTTP uses constant memory (~150-300 MB) regardless of user count
  • stdio memory grows linearly (~80 MB per user)
  • HTTP handles 10,000 users; stdio limited to ~500
```

As a sanity check, at ~80 MB per stdio process, 100 users predicts roughly 8 GB, which matches the measured 8.2 GB.
## Output Formats

### Console

Rich terminal output with tables and colors.

### JSON
```json
{
  "metadata": {
    "timestamp": "2026-01-08T14:30:52Z",
    "system": {
      "os": "Darwin",
      "cpu_cores": 12,
      "memory_gb": 32
    }
  },
  "results": [
    {
      "users": 100,
      "stdio": {
        "peak_memory_mb": 8200,
        "total_time_seconds": 12.4,
        "process_count": 100
      },
      "http": {
        "peak_memory_mb": 145,
        "total_time_seconds": 0.34,
        "process_count": 1
      }
    }
  ]
}
```
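The JSON report is easy to post-process. A short sketch that recomputes the improvement ratios from the schema above; the file path is an assumption (the CLI writes into `--output-dir`, default `./results`):

```python
import json

with open("results/results.json") as fh:  # path is an assumption
    report = json.load(fh)

for row in report["results"]:
    stdio, http = row.get("stdio"), row.get("http")
    if not (stdio and http):
        continue  # a transport may be skipped at high user counts
    mem_x = stdio["peak_memory_mb"] / http["peak_memory_mb"]
    time_x = stdio["total_time_seconds"] / http["total_time_seconds"]
    print(f"{row['users']:>6} users: {mem_x:.0f}x memory, {time_x:.0f}x time")
```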
### Markdown

GitHub-ready Markdown report with tables and insights.

### Charts

PNG charts for memory scaling, latency distribution, and throughput comparison.
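A memory-scaling chart can also be rebuilt by hand from the JSON report with matplotlib (already a dependency). A sketch, using the same assumed results path as above:

```python
import json

import matplotlib

matplotlib.use("Agg")  # render straight to a file; no display needed
import matplotlib.pyplot as plt

with open("results/results.json") as fh:  # path is an assumption
    report = json.load(fh)

rows = [r for r in report["results"] if r.get("stdio") and r.get("http")]
users = [r["users"] for r in rows]

fig, ax = plt.subplots()
ax.plot(users, [r["stdio"]["peak_memory_mb"] for r in rows], marker="o", label="stdio")
ax.plot(users, [r["http"]["peak_memory_mb"] for r in rows], marker="o", label="http")
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Concurrent users")
ax.set_ylabel("Peak memory (MB)")
ax.legend()
fig.savefig("memory_scaling.png", dpi=150)
```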
## How It Works

### stdio Transport
```text
User 1 ──► [Python subprocess] ──► Tool execution
User 2 ──► [Python subprocess] ──► Tool execution
User 3 ──► [Python subprocess] ──► Tool execution
...
User N ──► [Python subprocess] ──► Tool execution
```

**Result:** N processes, N × ~80 MB of memory
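You can observe the per-process cost directly by spawning a few idle interpreters as stand-ins for stdio servers and summing their resident memory. A standalone illustration, not the benchmark's own spawning code:

```python
import subprocess
import sys
import time

import psutil

# Idle interpreters stand in for per-user stdio MCP servers.
procs = [
    subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
    for _ in range(5)
]
time.sleep(1)  # give the interpreters a moment to finish starting

total_rss_mb = sum(psutil.Process(p.pid).memory_info().rss for p in procs) / 1e6
print(f"{len(procs)} processes, {total_rss_mb:.0f} MB total RSS")

for p in procs:
    p.terminate()
```

Bare interpreters are far lighter than a real MCP server process (the ~80 MB figure above includes the server's imports); the point is the linear growth, not the absolute number.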
### HTTP Transport

```text
User 1 ──┐
User 2 ──┼──► [Single HTTP Server] ──► Tool execution
User 3 ──┤
...      │
User N ──┘
```

**Result:** 1 process, ~100-300 MB of memory (constant)
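For reference, a mock server of the kind the benchmark drives can be written with fastmcp in a few lines. A sketch only, not the benchmark's bundled server; the tool body and the `transport`/`port` arguments follow fastmcp 2.x conventions:

```python
from fastmcp import FastMCP

mcp = FastMCP("benchmark-mock")


@mcp.tool
def echo(message: str) -> str:
    """Trivial tool, so the benchmark measures transport cost, not work."""
    return message


if __name__ == "__main__":
    # stdio: every connecting client gets its own copy of this process.
    mcp.run(transport="stdio")
    # http: one shared process serves all clients (port is an assumption):
    # mcp.run(transport="http", port=8001)
```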
## Project Structure
```text
mcp-transport-benchmark/
├── config/
│   ├── default.yaml        # Default configuration
│   └── examples/           # Example configurations
├── src/mcp_benchmark/
│   ├── servers/            # Mock MCP servers
│   ├── clients/            # Benchmark clients
│   ├── benchmarks/         # Benchmark runner
│   ├── reports/            # Output generators
│   └── cli.py              # Command-line interface
├── results/                # Generated results
├── docs/                   # Documentation
└── tests/                  # Test suite
```
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run the tests: `pytest`
5. Submit a pull request
## License
MIT License - see LICENSE for details.