Coreon-MCP-Execution-Engine

Natural Language → Tool Functions, auto-executed via LLMs. Structured. Traceable.

🔀 MCP Support Branch — coreon-mcp

We have created a dedicated branch to track our integration progress.
This branch includes the latest README focused on signal support and Agent payment workflows.

🔗 Branch link:
https://github.com/Coreon-Mcp/Coreon-MCP-Execution-Engine

📌 Highlights:

  • ✅ First MCP Agent Execution Engine on Solana
  • ✅ Detects HTTP signals from paid APIs
  • ✅ Suspends execution & informs user of required payment
  • 🚧 Autonomous Agent Payments — coming soon

If you're interested in Agent Economy development, please follow this branch for more updates.

📢 Latest Update

2026-01 - MCP Support Branch Released

A new branch, coreon-mcp, is available for tracking our integration with
the MCP payment signal protocol on Solana.

https://github.com/Coreon-Mcp/Coreon-MCP-Execution-Engine

Features include:

  • Multi-chain support (Solana)
  • PaymentRequired state management (see the sketch below)

(Autonomous agent payment coming soon 🚀)
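To make the PaymentRequired flow concrete, here is a minimal sketch of how a tool call might detect a payment signal, assuming the paid API uses the standard HTTP 402 status code; the function name and return shape are illustrative, not the engine's actual API:

```python
import requests

PAYMENT_REQUIRED = 402  # standard HTTP "Payment Required" status code

def call_paid_api(url: str, params: dict) -> dict:
    """Call a paid API; suspend on a payment signal instead of failing."""
    resp = requests.get(url, params=params, timeout=10)
    if resp.status_code == PAYMENT_REQUIRED:
        # Suspend execution and surface the payment request to the user.
        return {"state": "PaymentRequired", "detail": resp.text}
    resp.raise_for_status()
    return {"state": "ok", "data": resp.json()}
```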

2025-09 - The project now officially supports the Claude-style MCP protocol (stdio mode) in the new alpha version.

[View full details here](https://github.com/CoreonMcp/Coreon-MCP-Execution-Engine/tree/alpha?tab=readme-ov-file#5-claude-mcp-protocol-support-stdio-adapter)

![demo](./assets/gif/Claude.gif)

1. Product Overview

Coreon-MCP-Execution-Engine provides a unified runtime for structured ToolCall chains. It allows LLM agents or users to:

  • Dynamically execute multiple tools in sequence
  • Interact via terminal, HTTP API, or Telegram
  • Plug in custom tools via the modular tools/ system
  • Run anywhere via Docker — no manual setup required

This project is inspired by the idea of decoupling agent planning from tool execution, making it perfect for backend execution engines, plugin-based AI systems, or on-chain/off-chain hybrid AI workflows.

  • Built-in modes: CLI / API Server / Telegram Bot
  • Docker-native, zero local dependencies
  • Extensible with user-defined tools

2. Architecture Overview

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart LR
    subgraph Input[Input Layer]
        A[User / Bot / CLI]
    end

    subgraph Planning[Planning Layer]
        B["Planner\nIntent Recognition & Generate Plan(JSON)"]
    end

    subgraph Execution[Execution Layer]
        C["Executor\nSequentially Execute ToolChain"]
    end

    subgraph Registry[Tool Registry Layer]
        D["Tool Registry\nDeclaration: name/module/function/schema"]
    end

    subgraph Toolset[Toolset]
        E{{Tools}}
        E1[Market Data\nDexScreener / Binance]
        E2[News Fetcher\nCryptoPanic / Feeds]
        E3[Social Metrics\nTwitter / Telegram]
        E4[On-chain APIs\nToken Metadata / Holders]
        E5[Custom Utils\nFormatters / Indicators]
    end

    subgraph Output[Output Layer]
        F["Response Formatter\nHuman-readable & Structured Output"]
        G["Outputs\nCLI Charts / Telegram Messages / API JSON"]
    end

    A --> B
    B --> C
    C --> D
    D --> E
    E --> E1
    E --> E2
    E --> E3
    E --> E4
    E --> E5

    E1 --> C
    E2 --> C
    E3 --> C
    E4 --> C
    E5 --> C

    C --> F
    F --> G
```

Main Modules

1. Planner (Task Parsing)

Acts as the “brain” of the MCP Engine. It takes natural language input from CLI, Telegram Bot, or API and:

- Recognizes user intent with LLM-based analysis.

- Generates a structured execution plan (ToolCall Chain) in JSON format, as in the sketch below.

- Breaks down complex tasks into ordered, executable steps.
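For illustration, a plan for a request like "get the SOL price and the latest news" might look as follows; the tool names and field layout are assumptions of this sketch, not the engine's exact schema:

```python
# Hypothetical ToolCall chain emitted by the Planner (illustrative schema).
plan = {
    "steps": [
        {"id": 1, "tool": "market_data.get_price", "args": {"symbol": "SOL/USDT"}},
        {"id": 2, "tool": "news.fetch_latest", "args": {"topic": "solana", "limit": 5}},
    ]
}
```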

2. Executor (Chained ToolCall Execution)

The “execution core” responsible for carrying out the plan generated by the Planner (a simplified sketch follows this list):

- Executes tools step-by-step or in parallel when possible.

- Handles retries, error recovery, and logging.

- Ensures the correct order of execution across dependent tasks.
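A minimal sequential executor over such a plan could look like the sketch below; retry handling is reduced to a plain loop, and the registry is assumed here to map tool names directly to callables. The real engine's interfaces may differ.

```python
import logging

def execute_plan(plan: dict, registry: dict, max_retries: int = 2) -> list:
    """Run each step in order, with basic retries and logging (sketch)."""
    results = []
    for step in plan["steps"]:
        func = registry[step["tool"]]  # resolve the tool callable by name
        for attempt in range(1, max_retries + 2):
            try:
                results.append(func(**step["args"]))
                break
            except Exception as exc:
                logging.warning("step %s failed (attempt %d): %s",
                                step["id"], attempt, exc)
        else:
            raise RuntimeError(f"step {step['id']} exhausted retries")
    return results
```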

3. Tool Registry (Centralized Tool Management)

A unified registry for all tools used by the MCP Engine (an illustrative entry is sketched after this list):

- Declares each tool’s name, module path, function signature, and parameter schema.

- Stores tool metadata such as version and description.

- Allows new tools to be plugged in without changing the execution logic.
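An entry in such a registry might be declared like this; the field names follow the description above but are assumptions of this sketch:

```python
# Hypothetical tool declaration: name -> module path, function, schema, metadata.
TOOL_REGISTRY = {
    "market_data.get_price": {
        "module": "tools.market_data",   # module path to import from
        "function": "get_price",         # function to call within the module
        "schema": {"symbol": {"type": "string", "required": True}},
        "version": "1.0.0",
        "description": "Fetch the latest price for a trading pair",
    },
}
```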

4. Connectors (CLI, Telegram Bot)

Entry points for different user interaction modes:

- CLI – Developer-friendly command-line interface for direct execution and debugging (see the sketch below).

- Telegram Bot – Chat-based interface for instant, on-the-go interactions.
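As a sketch, a CLI connector is little more than a read-plan-execute loop. This reuses the illustrative `execute_plan` from the Executor sketch above and takes the Planner as a callable; none of these names are the engine's real API.

```python
def cli_loop(planner, registry: dict) -> None:
    """Minimal CLI connector: read a query, plan, execute, print (sketch)."""
    while True:
        query = input("mcp> ").strip()
        if query in {"exit", "quit"}:
            break
        plan = planner(query)  # Planner: natural language -> ToolCall chain
        for result in execute_plan(plan, registry):
            print(result)
```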

3. Installation & Run

1. Environment Requirements

- Python 3.11+

- Docker

2. How to Install Docker

macOS / Windows / Linux
  1. Download from the official Docker site:

    https://www.docker.com/products/docker-desktop

  2. Follow the installation steps.

  3. After installation, run:

    docker --version
    

    If you see version output, Docker is installed.

3. Environment Configuration (.env)

Follow these steps to get the MCP Engine running in minutes.

1. Create the Execution Environment Directory

```bash
mkdir mcp-execution-env
cd mcp-execution-env
```

2. Create the .env File

Generate the environment file with required variables:

```bash
cat <<EOF > .env
MCP_LANG=EN
OPENAI_API_KEY=sk-xxxxxxxxxx
EOF
```

| Variable       | Description        | Required |
| -------------- | ------------------ | -------- |
| MCP_LANG       | Language: EN or ZH | Yes      |
| OPENAI_API_KEY | OpenAI API key     | Yes      |

Replace `sk-xxxxxxxxxx` with your actual OpenAI API key.

4. Quick Start

1. Pull the Docker Image

```bash
docker pull coreonmcp/coreon-mcp-execution-engine
```

2. Start CLI Mode


```bash
docker run --rm -it --env-file .env coreonmcp/coreon-mcp-execution-engine start cli
```

3. API Server Mode


```bash
docker run --rm -it --env-file .env -p 8080:8080 coreonmcp/coreon-mcp-execution-engine start server
```
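The HTTP routes are not documented in this README, so the following client call is only a guess at the shape of a request; the `/execute` endpoint and payload are assumptions, and you should check the repository for the real API.

```python
import requests

# Hypothetical request; the actual route and payload may differ.
resp = requests.post(
    "http://localhost:8080/execute",  # assumed endpoint on the mapped port
    json={"query": "What is the current SOL price?"},
    timeout=30,
)
print(resp.json())
```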

4. Telegram Bot Mode


```bash
docker run --rm -it --env-file .env coreonmcp/coreon-mcp-execution-engine start telegram-bot
```

🔗 Solana Integration

Coreon MCP Execution Engine is designed as the AI Execution Layer for Web3, with a strong focus on the Solana ecosystem.

Current support: Query balances, token metadata, DeFi data, and contract calls on Solana.
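As one concrete example, a balance query needs nothing beyond Solana's public JSON-RPC; the sketch below uses the standard `getBalance` method. The RPC endpoint and address are placeholders, and the engine's own tool may be implemented differently.

```python
import requests

def get_sol_balance(pubkey: str,
                    rpc_url: str = "https://api.mainnet-beta.solana.com") -> float:
    """Return an account's SOL balance via the standard getBalance RPC method."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": "getBalance", "params": [pubkey]}
    lamports = requests.post(rpc_url, json=payload, timeout=10).json()["result"]["value"]
    return lamports / 1_000_000_000  # convert lamports to SOL
```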

Mid-term roadmap: Natural-language swaps on PancakeSwap, AI wallet assistants, and on-chain security monitoring for Solana users.

Future vision: Expand to Solana (for low-cost L2 execution) and Greenfield (for decentralized data storage), making Coreon MCP a full-stack AI interface for the entire Solana ecosystem.

By bridging natural language with on-chain execution, Coreon MCP lowers entry barriers and positions Solana as the first AI-ready blockchain.

🛡️ Security Audit Report

An independent security audit was conducted by Armors Labs on the Coreon MCP Execution Engine.

  • Result: PASSED ✅
  • Auditor: Armors Labs

The full audit report is available here:
👉 Audit Report (PDF)

🌟 Future Vision

Our story is just beginning. The MCP Execution Engine will keep evolving — becoming smarter and more powerful with every iteration. We’ll continue to refine features, explore new possibilities, and work hand in hand with developers to shape the future of Web3.

Quick Setup
Installation guide for this server

Installation Command (package not published)

```bash
git clone https://github.com/Coreon-Mcp/Coreon-MCP-Agent
```

Manual Installation: Please check the README for detailed setup instructions and any additional dependencies required.

Cursor configuration (mcp.json)

{ "mcpServers": { "coreon-mcp-coreon-mcp-agent": { "command": "git", "args": [ "clone", "https://github.com/Coreon-Mcp/Coreon-MCP-Agent" ] } } }