🌉 MCP Universal AI Bridge

Universal MCP (Model Context Protocol) Bridge Server connecting devices to multiple AI providers: Claude, Gemini, ChatGPT, and more.

License: MIT · TypeScript · Node.js

✨ Features

  • 🤖 Multi-Provider Support: Claude (Opus/Sonnet/Haiku), Gemini (2.0/1.5), ChatGPT (GPT-4o/o1)
  • 💬 Session Management: Persistent conversations with history
  • 📱 Device Tracking: Web, mobile, desktop, server devices
  • 🌊 Streaming Responses: Real-time SSE streaming
  • 🔧 Tool Calling: Function execution across all providers
  • 📊 Usage Tracking: Token counting and cost calculation
  • 🏥 Health Monitoring: Automatic provider health checks
  • 🧹 Auto-Cleanup: Background job for inactive sessions
  • 🚀 Multiple Deployment Options: Cloudflare Workers, Vercel, Docker, Node.js

📦 Quick Start

1. Installation

# Clone repository
git clone https://github.com/yourusername/mcp-universal-bridge.git
cd mcp-universal-bridge

# Install dependencies
npm install

# Configure environment
cp .env.example .env
# Edit .env with your API keys

2. Configuration

Add at least one API key to .env:

# Required: Add at least one
ANTHROPIC_API_KEY=sk-ant-xxx
GOOGLE_API_KEY=AIzaSyxxx
OPENAI_API_KEY=sk-xxx

# Optional: Customize models
CLAUDE_MODEL=claude-sonnet-4-5-20250929
GEMINI_MODEL=gemini-2.0-flash-exp
OPENAI_MODEL=gpt-4o

3. Run

# Development
npm run dev

# Production
npm run build
npm start

# Server runs on http://localhost:3000

🔌 API Endpoints

Health & Info

# Welcome
GET /

# Health check
GET /health

# Statistics
GET /stats

Device Management

# Register device
POST /devices/register
{
  "name": "My Device",
  "type": "web",
  "capabilities": {
    "streaming": true,
    "tools": true,
    "vision": false
  }
}

# List devices
GET /devices

# Get device
GET /devices/:id

# Disconnect device
DELETE /devices/:id

Session Management

# Create session
POST /sessions
{
  "deviceId": "dev_xxx",
  "config": {
    "provider": "claude",
    "model": "claude-sonnet-4-5-20250929",
    "temperature": 0.7,
    "systemPrompt": "You are a helpful assistant"
  }
}

# Get session
GET /sessions/:id

# Delete session
DELETE /sessions/:id

Chat

# Send message
POST /chat
{
  "sessionId": "ses_xxx",
  "message": "Hello, how are you?",
  "streaming": false
}

# Stream response
POST /chat/stream
{
  "sessionId": "ses_xxx",
  "message": "Tell me a story",
  "streaming": true
}
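The stream endpoint returns Server-Sent Events. The exact wire format is not specified here, so the sketch below assumes each event is a `data: <json>` line whose payload carries a `delta` text field (matching the chunks the client SDK yields); adjust if the actual format differs.

```typescript
interface StreamChunk {
  delta?: string;
  [key: string]: unknown;
}

// Parse a buffered SSE response body into chunks. Assumes blank-line
// separated events of the form "data: <json>", with an optional
// "[DONE]" sentinel (both are assumptions, not documented behavior).
function parseSSEChunks(raw: string): StreamChunk[] {
  return raw
    .split("\n\n")
    .map((event) => event.trim())
    .filter((event) => event.startsWith("data:"))
    .map((event) => event.slice("data:".length).trim())
    .filter((payload) => payload !== "" && payload !== "[DONE]")
    .map((payload) => JSON.parse(payload) as StreamChunk);
}
```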

Tool Execution

# Execute tools
POST /tools
{
  "sessionId": "ses_xxx",
  "toolResults": [
    {
      "id": "tool_xxx",
      "result": { "data": "..." },
      "error": null
    }
  ]
}
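Each entry in toolResults pairs a tool-call id with either a result or an error. A client-side guard (a hypothetical helper, not part of the server API) can validate entries before posting:

```typescript
interface ToolResult {
  id: string;
  result: unknown;
  error: string | null;
}

// Hypothetical sanity check: an entry needs the tool-call id, and
// exactly one of `result` / `error` should be set.
function isValidToolResult(entry: ToolResult): boolean {
  if (!entry.id) return false;
  const hasResult = entry.result !== null && entry.result !== undefined;
  const hasError = entry.error !== null && entry.error !== undefined;
  return hasResult !== hasError;
}
```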

Provider Info

# List providers
GET /providers

# Get models
GET /providers/claude/models
GET /providers/gemini/models
GET /providers/chatgpt/models

Admin

# Manual cleanup
POST /admin/cleanup

# Reset statistics
POST /admin/stats/reset

💻 Client SDK

TypeScript/JavaScript Example

import { BridgeClient } from './examples/client';

const client = new BridgeClient('http://localhost:3000');

// Register device
const device = await client.registerDevice('My App', 'web');

// Create session
const session = await client.createSession(device.id, {
  provider: 'claude',
  model: 'claude-sonnet-4-5-20250929',
  temperature: 0.7,
});

// Send message
const response = await client.sendMessage(session.id, 'Hello!');
console.log(response.response);

// Stream response
for await (const chunk of client.streamMessage(session.id, 'Tell me a story')) {
  process.stdout.write(chunk.delta);
}

Python Example

import requests

BASE_URL = "http://localhost:3000"

# Register device
device = requests.post(f"{BASE_URL}/devices/register", json={
    "name": "Python Client",
    "type": "server"
}).json()

# Create session
session = requests.post(f"{BASE_URL}/sessions", json={
    "deviceId": device["device"]["id"],
    "config": {
        "provider": "gemini",
        "model": "gemini-2.0-flash-exp",
        "temperature": 0.7
    }
}).json()

# Send message
response = requests.post(f"{BASE_URL}/chat", json={
    "sessionId": session["session"]["id"],
    "message": "Hello from Python!"
}).json()

print(response["response"])

cURL Example

# Register device
DEVICE=$(curl -X POST http://localhost:3000/devices/register \
  -H "Content-Type: application/json" \
  -d '{"name":"cURL Client","type":"server"}' | jq -r '.device.id')

# Create session
SESSION=$(curl -X POST http://localhost:3000/sessions \
  -H "Content-Type: application/json" \
  -d "{\"deviceId\":\"$DEVICE\",\"config\":{\"provider\":\"chatgpt\",\"model\":\"gpt-4o\"}}" | jq -r '.session.id')

# Send message
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d "{\"sessionId\":\"$SESSION\",\"message\":\"Hello!\"}" | jq '.response'

โ˜๏ธ Deployment

Cloudflare Workers

# Install Wrangler
npm install -g wrangler

# Login
wrangler login

# Set secrets
wrangler secret put ANTHROPIC_API_KEY
wrangler secret put GOOGLE_API_KEY
wrangler secret put OPENAI_API_KEY

# Deploy
npm run deploy:cloudflare

Vercel

# Install Vercel CLI
npm install -g vercel

# Login
vercel login

# Deploy
npm run deploy:vercel

# Add environment variables in Vercel dashboard

Docker

# Build and run with Docker Compose
docker-compose up -d

# Or build manually
docker build -t mcp-bridge .
docker run -p 3000:3000 \
  -e ANTHROPIC_API_KEY=xxx \
  -e GOOGLE_API_KEY=xxx \
  -e OPENAI_API_KEY=xxx \
  mcp-bridge

Traditional Server

# Build
npm run build

# Run on port 3000
PORT=3000 npm start

# Or use PM2
pm2 start dist/index.js --name mcp-bridge

🤖 Supported Providers & Models

Claude / Anthropic

| Model | Context | Best For |
|-------|---------|----------|
| claude-opus-4-5-20250514 | 200K | Complex reasoning, analysis |
| claude-sonnet-4-5-20250929 | 200K | Balanced performance |
| claude-3-5-sonnet-20241022 | 200K | Fast, capable |
| claude-3-5-haiku-20241022 | 200K | Speed, cost-effective |

Pricing: $1-15 per million input tokens

Google Gemini

| Model | Context | Best For |
|-------|---------|----------|
| gemini-2.0-flash-exp | 1M | Latest experimental |
| gemini-2.0-flash-thinking-exp-01-21 | 32K | Reasoning tasks |
| gemini-1.5-pro | 2M | Large context |
| gemini-1.5-flash | 1M | Fast responses |

Pricing: Free (experimental) or $0.075-1.25 per million tokens

OpenAI ChatGPT

| Model | Context | Best For |
|-------|---------|----------|
| gpt-4o | 128K | Multimodal tasks |
| gpt-4o-mini | 128K | Cost-effective |
| o1 | 200K | Complex reasoning |
| gpt-3.5-turbo | 16K | Simple tasks |

Pricing: $0.15-30 per million input tokens
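Cost calculation from usage tracking is straightforward arithmetic on per-million-token rates. The rates in this sketch are illustrative placeholders drawn from the ranges above, not official pricing:

```typescript
// USD per million tokens; illustrative values only, not official pricing.
const RATES_PER_MTOK: Record<string, { input: number; output: number }> = {
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
  "gemini-1.5-flash": { input: 0.075, output: 0.3 },
};

// Compute request cost from token counts and the rate table.
function costUSD(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES_PER_MTOK[model];
  if (!rate) throw new Error(`no rate entry for ${model}`);
  return (inputTokens / 1e6) * rate.input + (outputTokens / 1e6) * rate.output;
}
```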

📊 Features Comparison

| Feature | Claude | Gemini | ChatGPT |
|---------|--------|--------|---------|
| Chat Completions | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ |
| Tool Calling | ✅ | ✅ | ✅ |
| Usage Tracking | ✅ | ✅ | ✅ |
| Cost Calculation | ✅ | ✅ | ✅ |
| System Prompts | ✅ | ✅ | ✅ |
| Context Window | 200K | 1-2M | 128K |

🔧 Configuration

Environment Variables

# Server
PORT=3000
NODE_ENV=production

# API Keys
ANTHROPIC_API_KEY=sk-ant-xxx
GOOGLE_API_KEY=AIzaSyxxx
OPENAI_API_KEY=sk-xxx

# Model Defaults
CLAUDE_MODEL=claude-sonnet-4-5-20250929
GEMINI_MODEL=gemini-2.0-flash-exp
OPENAI_MODEL=gpt-4o

# Timeouts
API_TIMEOUT=60000

# Custom Base URLs (optional)
ANTHROPIC_BASE_URL=
OPENAI_BASE_URL=
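A config loader that applies the defaults above can be sketched as follows. The fallback values mirror this table; the `BridgeConfig` shape and `loadConfig` name are illustrative, not the server's actual internals:

```typescript
// Resolve server settings from an environment map, falling back to the
// defaults documented above when a variable is unset.
interface BridgeConfig {
  port: number;
  claudeModel: string;
  geminiModel: string;
  openaiModel: string;
  apiTimeoutMs: number;
}

function loadConfig(env: Record<string, string | undefined>): BridgeConfig {
  return {
    port: Number(env.PORT ?? 3000),
    claudeModel: env.CLAUDE_MODEL ?? "claude-sonnet-4-5-20250929",
    geminiModel: env.GEMINI_MODEL ?? "gemini-2.0-flash-exp",
    openaiModel: env.OPENAI_MODEL ?? "gpt-4o",
    apiTimeoutMs: Number(env.API_TIMEOUT ?? 60000),
  };
}
```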

📈 Monitoring

Health Check

curl http://localhost:3000/health
{
  "status": "healthy",
  "timestamp": "2025-01-13T10:00:00.000Z",
  "providers": {
    "claude": { "healthy": true, "latency": 120 },
    "gemini": { "healthy": true, "latency": 95 },
    "chatgpt": { "healthy": true, "latency": 110 }
  }
}

Statistics

curl http://localhost:3000/stats
{
  "totalDevices": 42,
  "activeDevices": 15,
  "totalSessions": 128,
  "activeSessions": 8,
  "messagesSent": 5432,
  "tokensUsed": 1234567,
  "providers": {
    "claude": {
      "requests": 234,
      "tokens": 567890,
      "errors": 2,
      "avgLatency": 125
    }
  },
  "uptime": 86400000,
  "startedAt": "2025-01-12T10:00:00.000Z"
}
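Note that uptime is reported in milliseconds (86400000 ms = 24 hours, consistent with startedAt being one day before the health-check timestamp). A small rendering helper, purely illustrative:

```typescript
// Render the stats `uptime` field (milliseconds) as hours/minutes/seconds.
function formatUptime(ms: number): string {
  const totalSec = Math.floor(ms / 1000);
  const h = Math.floor(totalSec / 3600);
  const m = Math.floor((totalSec % 3600) / 60);
  const s = totalSec % 60;
  return `${h}h ${m}m ${s}s`;
}
```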

๐Ÿ›ก๏ธ Security

  • API Keys: Never commit API keys to version control
  • Environment Variables: Use .env files locally, secrets management in production
  • CORS: Configure CORS policy for your domain
  • Rate Limiting: Implement rate limiting for production use
  • Timeouts: Configure appropriate timeouts to prevent hanging requests
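The bridge does not ship a rate limiter, so its shape is up to the deployment. A minimal in-memory token bucket, keyed per device, might look like this (a sketch only; multi-instance deployments would typically back it with a shared store such as Redis):

```typescript
// Minimal token bucket: allows `capacity` burst requests and refills at
// `refillPerSec`. `now` is injectable so the logic is testable.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, consuming one token.
  tryRemove(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```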

๐Ÿค Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

๐Ÿ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

๐Ÿ“ง Support


Made with ❤️ by the MCP Community
