An MCP server for the Sui data stack, providing orchestration across gRPC, GraphQL, and the Archival Service.

Created 5/8/2026

sui-mcp-server

A local-first MCP server that lets any AI agent — Claude Desktop, Claude Code, a custom Anthropic SDK loop, or anything else that speaks the Model Context Protocol — query the Sui blockchain through its new data stack: gRPC, GraphQL, and the Archival Service.

Plug it into your agent and you can ask plain-English questions about Sui:

  • "What's happening on Sui mainnet right now?"
  • "Walk me through the recent activity for address 0x…"
  • "Fetch object 0x5 at version 100 — which endpoint had to answer?"
  • "What public functions does the sui_system package expose?"

The server handles transport selection, retention boundaries, schema introspection, and response shaping. The agent just answers the question.


What this actually is

The Model Context Protocol (MCP) is a standard way for AI agents to call external tools. An MCP server exposes a catalog of tools (think: REST endpoints with rich descriptions); an MCP client — usually an LLM-driven agent — picks which tools to call to satisfy a user's request.

This repo is an MCP server for Sui data. It runs locally as a Node subprocess, speaks MCP over stdio, and exposes 29 tools mapped to the new Sui APIs. The example agent at examples/agent.ts is a thin Anthropic-SDK loop that connects to the server, lets Claude pick tools, and runs the conversation. You can use that agent as-is, or swap in any other MCP client.

Why this exists

Sui's JSON-RPC sunsets on 2026-07-31. The replacement is a three-layer data stack — gRPC (low-latency point reads), GraphQL RPC (relational reads), and the Archival Service (deep history) — that's well-shaped for production indexers and SDKs.

It's not yet shaped for agents. Agents want:

  • A small set of task-shaped tools that map to common questions ("balance of address X", "what happened in tx Y").
  • An escape hatch when the curated tools don't fit, with schema discovery so the agent can navigate without prior knowledge.
  • Auto-routing across endpoints so the agent doesn't need to know about retention boundaries.
  • Responses pre-shaped for an LLM context window — no BigInt serialization errors, no raw Uint8Arrays, no proto-style empty wrappers.

This server provides exactly that. It's a thin layer — the heavy lifting still happens on the Sui side — but the layer is what makes Sui usable from a one-shot LLM prompt instead of a multi-week SDK integration.

What you get

A hybrid surface combining curated intent tools with a schema-introspective dispatcher:

| Category | Tools | What they're for |
| --- | --- | --- |
| Ledger reads (gRPC) | sui_get_object, sui_get_transaction, sui_get_checkpoint, sui_get_epoch, sui_batch_get_objects, sui_batch_get_transactions, sui_get_service_info | Single-entity lookups; defaults to live → archive auto-routing |
| Live state (gRPC) | sui_get_balance, sui_list_balances, sui_list_owned_objects, sui_get_coin_info, sui_list_dynamic_fields | Address-level live state |
| Move packages (gRPC) | sui_get_package, sui_get_function, sui_get_datatype, sui_list_package_versions | Contract introspection |
| Relational reads (GraphQL) | sui_address_overview, sui_chain_tip, sui_recent_checkpoints, sui_object_history_step, sui_transaction_rich | Cross-entity queries in one round-trip |
| Streaming (gRPC) | sui_stream_checkpoints | Tail of chain — bounded window per call |
| Execution (gRPC) | sui_simulate_transaction, sui_execute_transaction (gated) | Dry-run always; submit only when explicitly enabled |
| Introspection | sui_describe_grpc_services, sui_list_endpoints, sui_graphql_introspect | "What can I do?" — feeds the dispatcher pattern |
| Escape hatches | sui_grpc_call, sui_graphql_query | Raw passthrough when no curated tool fits |

Every response includes a routing trace — source, endpoint, network, latency — so both the agent and a human watching the terminal can see exactly which transport answered.

Quick start

Requires Node 22+.

git clone <this repo> sui-mcp-server
cd sui-mcp-server
npm install
npm run build

Sanity check (no API key, no network):

node scripts/smoke.mjs

You should see 29 tools register and one offline call succeed.

Talk to it interactively (needs an Anthropic API key):

export ANTHROPIC_API_KEY=sk-ant-...
npm run agent

You'll get a sui[mainnet]> prompt. Try asking "What's the latest checkpoint?" and follow up with "and how does that compare to testnet?" — the conversation persists across turns, so Claude builds context. Use /network testnet to flip networks mid-session, /help for the full command list, and Ctrl-D to exit.

Example prompts to try

These exercise different paths through the server. Each one is the kind of thing an agent should be able to answer without the user knowing anything about gRPC vs GraphQL.

Tip-of-chain reads (gRPC + GraphQL relational):

  • "What's the latest checkpoint on Sui mainnet? Include the timestamp and network total transactions."
  • "Compare the reference gas prices on mainnet and testnet right now."

Address profiling (GraphQL — one query for many things):

  • "Give me a profile of address 0x… — balance, top owned objects, recent transactions."

Move package introspection:

  • "What public functions does the package at 0x3 (sui_system) expose? Pick one and show me its full signature."

Auto-routing across live and archive:

  • "Fetch object 0x5 at version 100 and tell me which endpoint had to answer." — Live full nodes typically don't retain versions that old; the trace will show the live attempt returning empty before the call falls back to the Archival Service.

Streaming + cadence:

  • "Stream 5 checkpoints and tell me the average time between them."

Simulation (safe — never touches live state):

  • "Simulate this BCS-encoded transaction and tell me whether it would succeed and how much gas it would burn." (Pass the bytes inline.)

When you ask follow-ups, Claude reuses what it already learned — no redundant tool calls.

Under the hood

The Sui data stack in 30 seconds

The new stack has three pieces, served behind public-good URLs:

  • gRPC fullnode (fullnode.<network>.sui.io): the canonical low-latency reads. Five services — LedgerService (objects, transactions, checkpoints, epochs), StateService (live balances and owned objects), MovePackageService (contract introspection), SubscriptionService (server-streaming), TransactionExecutionService (submit + simulate).
  • GraphQL RPC (graphql.<network>.sui.io/graphql): an indexer-backed relational layer. Best for cross-entity queries — "address X's balance + owned objects + last 10 transactions" in one round-trip.
  • Archival Service (archive.<network>.sui.io): the same LedgerService interface as the live full node, but backed by long-retention storage. Use it when an object/transaction/checkpoint is older than the live full node still holds.

That last point is the architectural keystone: the Archival Service implements the same gRPC interface as the live full node. Same client code, same response shapes — only the URL differs. This server takes advantage of that symmetry to do live → archive auto-fallback transparently.

How a tool call flows

agent ──MCP─▶ sui-mcp-server ──┬─▶ SuiGrpcClient ──▶ live full node (gRPC)
                                │                ╰──▶ Archival Service (gRPC, fallback)
                                ╰─▶ fetch() ────▶ GraphQL endpoint

Each curated tool wraps a service-specific call shape (the right read_mask, the right oneofKind variant, the right BigInt coercion) so the agent doesn't have to know proto-ts conventions. The escape-hatch tools (sui_grpc_call, sui_graphql_query) bypass the wrappers when a request doesn't fit a curated shape.

Auto-routing across live and archive

For Ledger reads, the default source: "auto" means: try the live full node first, inspect the response with a per-shape "is this empty?" predicate, fall back to archive if live errored or returned an empty payload. The trace records both attempts so it's clear what happened:

{
  "source": "archival",
  "label": "ledger.GetObject (after live: empty)",
  "rationale": "...auto-route: live returned empty payload (likely past retention) → fell back to Archival Service",
  "endpoint": "https://archive.mainnet.sui.io",
  "latencyMs": 142
}

Override with source: "live" or source: "archive" only when you have a specific reason (benchmarking, you already know the version is old, etc.).
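The live → archive decision described above can be sketched as a small helper. This is a hypothetical sketch for illustration; the repo's actual grpcAutoCall and looksEmpty live in src/clients/grpc.ts and may differ in shape:

```typescript
// Sketch of live → archive auto-routing: try live, check a per-shape emptiness
// predicate, fall back to the Archival Service on error or empty payload.
// All names here are illustrative, not the repo's actual API.
interface Routed<T> {
  source: "live" | "archival";
  label: string;
  value: T;
}

async function grpcAutoRoute<T>(
  label: string,
  live: () => Promise<T>,
  archive: () => Promise<T>,
  looksEmpty: (v: T) => boolean,
): Promise<Routed<T>> {
  try {
    const v = await live();
    if (!looksEmpty(v)) return { source: "live", label, value: v };
    // Empty payload usually means the entity is past the live node's retention.
    return { source: "archival", label: `${label} (after live: empty)`, value: await archive() };
  } catch {
    // Live errored outright: same fallback, different label for the trace.
    return { source: "archival", label: `${label} (after live: error)`, value: await archive() };
  }
}
```

Both attempts surface in the label, which is what makes the trace above readable.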

Why GraphQL doesn't auto-fall-back to archive

Sui's GraphQL RPC is indexer-backed and already composes the Archival Service as one of its data sources. So an empty GraphQL result usually means the indexer doesn't have the entity at the requested level — running a parallel archive query rarely helps and adds noise.

The right escalation when GraphQL returns null for a specific id/digest/checkpoint is to drop to the gRPC LedgerService curated tool with source: "auto". That covers retention-boundary cases the indexer hasn't materialized, without doing redundant work for cases the indexer does cover. The tool descriptions and sui_describe_grpc_services notes both encode this policy so the agent inherits it for free.

LLM-friendly response shaping

Proto responses contain things JSON.stringify can't handle: BigInt for uint64/int64 fields, Uint8Array for digests and BCS bytes, and google.protobuf.Timestamp { seconds, nanos } blobs. The jsonSafe walker in src/util/json.ts produces a clean view: bigints become strings, byte arrays become 0x-hex, timestamps become { epochMs, iso }. The agent sees something it can read and quote back to the user without serializer errors.
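A minimal sketch of such a walker, assuming proto-ts conventions (bigint for 64-bit ints, `{ seconds, nanos }` timestamps); the real jsonSafe in src/util/json.ts handles more cases:

```typescript
// Recursively convert proto-style values into JSON-safe equivalents:
// bigint → decimal string, Uint8Array → 0x-hex, Timestamp → { epochMs, iso }.
function jsonSafe(value: unknown): unknown {
  if (typeof value === "bigint") return value.toString();
  if (value instanceof Uint8Array)
    return "0x" + Array.from(value, (b) => b.toString(16).padStart(2, "0")).join("");
  if (Array.isArray(value)) return value.map(jsonSafe);
  if (value && typeof value === "object") {
    const o = value as Record<string, unknown>;
    // Heuristic for google.protobuf.Timestamp { seconds, nanos } (an assumption
    // about the field types, not the repo's exact detection logic).
    if (typeof o.seconds === "bigint" && typeof o.nanos === "number") {
      const epochMs = Number(o.seconds) * 1000 + Math.floor(o.nanos / 1e6);
      return { epochMs, iso: new Date(epochMs).toISOString() };
    }
    return Object.fromEntries(Object.entries(o).map(([k, v]) => [k, jsonSafe(v)]));
  }
  return value;
}
```

After this pass, `JSON.stringify` succeeds on any response the wrappers return.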

Surviving SDK drift

@mysten/sui v2 is still iterating. Method paths drift between minor releases (client.ledgerService.getObject vs client.core.getObject vs client.getObject). The curated tools use a callFirst fallback chain so they keep working across releases; when something does break, error messages are tagged with hints about which arg shape the SDK is expecting. All SDK-specific knowledge is concentrated in one or two files, so adjustments are localized.
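What such a fallback chain might look like, as a sketch; the real callFirst in src/clients/grpc.ts also tags errors with arg-shape hints:

```typescript
// Try each candidate call shape in order; collect failures so the final error
// explains every attempt. Illustrative only — not the repo's exact helper.
async function callFirst<T>(candidates: Array<() => Promise<T>>): Promise<T> {
  const failures: string[] = [];
  for (const candidate of candidates) {
    try {
      return await candidate();
    } catch (err) {
      // Typically "x is not a function" when a method path moved between SDK minors.
      failures.push(err instanceof Error ? err.message : String(err));
    }
  }
  throw new Error(`all call shapes failed: ${failures.join(" | ")}`);
}
```

A curated tool would then pass candidates like `() => client.ledgerService.getObject(req)`, `() => client.core.getObject(req)`, and `() => client.getObject(req)`, matching the drifting method paths named above.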

Using it locally

Interactive REPL (recommended)

npm run agent                          # mainnet, REPL
npm run agent -- --network testnet     # testnet, REPL
npm run agent -- --max-rounds 60       # raise the per-turn tool-call budget

The REPL preserves conversation context across turns, so follow-ups reuse what was already fetched instead of re-querying. Tool calls render as [tool 3/25] sui_chain_tip {...} on stderr — /trace off silences them.

One-shot CLI

npm run agent -- "What's the latest checkpoint on mainnet?"
npm run agent -- --network testnet "Show me a recent transaction digest"
npm run agent -- tip address tx recent     # canned examples (one at a time)

Slash commands inside the REPL

| Command | What it does |
| --- | --- |
| /network mainnet\|testnet | Switch network mid-session — takes effect immediately, no restart |
| /rounds <N> | Adjust the per-turn tool-call budget (default 25) |
| /clear | Reset conversation history |
| /history | Show conversation length and session settings |
| /tools | List the MCP tools the server exposed |
| /trace on\|off | Show or hide tool-call traces |
| /help | Slash-command reference |
| /quit | Exit (Ctrl-D works too) |

For deeper testing recipes — auto-routing fallback verification, MCP Inspector usage, troubleshooting the cap-hit case — see TESTING.md.

Using it in production

From Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) and add:

{
  "mcpServers": {
    "sui": {
      "command": "node",
      "args": ["/absolute/path/to/sui-mcp-server/dist/server.js"],
      "env": {
        "SUI_MCP_DEFAULT_NETWORK": "mainnet"
      }
    }
  }
}

Restart Claude Desktop. The Sui tools appear in the tool palette automatically.

From Claude Code

Drop the same JSON into .claude/mcp.json at your project root.

As a hosted service

The current build speaks MCP over stdio — the canonical local-process transport. To deploy it as a remote service, wrap with one of the supported MCP HTTP/SSE transports. The server logic is transport-agnostic; the swap happens in two lines of src/server.ts. See the MCP TypeScript SDK README for the latest transport options.

For a multi-tenant deployment, run one server process per concurrent agent (cheap — it's a Node subprocess that reaches out to public-good Sui endpoints) and pass per-tenant config via env vars.

Configuration

All via env vars. Per-network overrides take precedence over global ones; both take precedence over the public-good defaults.

| Variable | Default | Purpose |
| --- | --- | --- |
| SUI_MCP_DEFAULT_NETWORK | mainnet | Network used when a tool call doesn't specify one |
| SUI_MAINNET_GRPC_URL / SUI_TESTNET_GRPC_URL | public-good URL | Per-network gRPC endpoint override |
| SUI_MAINNET_GRAPHQL_URL / SUI_TESTNET_GRAPHQL_URL | public-good URL | Per-network GraphQL endpoint override |
| SUI_MAINNET_ARCHIVE_URL / SUI_TESTNET_ARCHIVE_URL | public-good URL | Per-network Archival Service endpoint override |
| SUI_GRPC_URL / SUI_GRAPHQL_URL / SUI_ARCHIVE_URL | — | Global fall-throughs |
| SUI_MCP_ENABLE_EXECUTION | false | Expose sui_execute_transaction. See safety model below |
| SUI_MCP_ENABLE_SUBSCRIPTIONS | true | Expose sui_stream_checkpoints |
| SUI_MCP_STREAM_MAX_FRAMES | 10 | Per-call cap on streamed frames |
| SUI_MCP_STREAM_MAX_SECONDS | 30 | Per-call wall-clock cap on streaming |
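The precedence rule (per-network override, then global override, then public-good default) can be sketched as a small resolver. The env-var names follow the table above; the helper itself is hypothetical, and the real resolution lives in src/config.ts:

```typescript
// Resolve an endpoint URL: SUI_<NETWORK>_<KIND>_URL beats SUI_<KIND>_URL beats
// the built-in public-good default.
type Network = "mainnet" | "testnet";
type Kind = "GRPC" | "GRAPHQL" | "ARCHIVE";

function resolveEndpoint(
  env: Record<string, string | undefined>,
  network: Network,
  kind: Kind,
  publicGoodDefault: string,
): string {
  return (
    env[`SUI_${network.toUpperCase()}_${kind}_URL`] ?? // e.g. SUI_MAINNET_GRPC_URL
    env[`SUI_${kind}_URL`] ??                          // global fall-through
    publicGoodDefault
  );
}
```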

Default endpoints:

  • Mainnet: fullnode.mainnet.sui.io / graphql.mainnet.sui.io/graphql / archive.mainnet.sui.io
  • Testnet: fullnode.testnet.sui.io / graphql.testnet.sui.io/graphql / archive.testnet.sui.io

Safety model

Reads are unrestricted. Anything LedgerService / StateService / MovePackageService / GraphQL exposes is fair game.

Simulation (sui_simulate_transaction) is always available — it's a dry-run, no state changes, no fees.

Execution is gated behind SUI_MCP_ENABLE_EXECUTION=true. Even when on, the server never accepts private keys — it only forwards opaque, pre-signed transaction bytes. Signing happens in the user's wallet/SDK; the MCP server is a transport, not a wallet.

Subscriptions are bounded per-call (max frames + max seconds) so a misbehaving agent can't tie up the connection. Tail the chain by calling repeatedly with the returned cursor.
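Tailing via repeated bounded calls might look like the following sketch. The page and cursor field names here are assumptions for illustration, not the tool's actual response schema:

```typescript
// Repeatedly call a bounded stream tool, resuming from the returned cursor,
// until the chain tip is reached or a page budget is spent.
interface StreamPage {
  checkpoints: number[];
  nextCursor: number | null; // null once the bounded window is exhausted
}

async function tailCheckpoints(
  fetchPage: (cursor: number | null) => Promise<StreamPage>,
  maxPages: number,
): Promise<number[]> {
  const seen: number[] = [];
  let cursor: number | null = null;
  for (let i = 0; i < maxPages; i++) {
    const page = await fetchPage(cursor); // each call respects the frame/time caps
    seen.push(...page.checkpoints);
    if (page.nextCursor === null) break;
    cursor = page.nextCursor;
  }
  return seen;
}
```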

Raw gRPC (sui_grpc_call) is read-only. It explicitly blocks executeTransaction and subscribeCheckpoints — those have their own dedicated tools with safety wrappers.

Project layout

src/
├── server.ts                # MCP entry point — stdio transport + tool registration
├── config.ts                # network + endpoint resolution + feature flags
├── clients/
│   ├── grpc.ts              # SuiGrpcClient wrapper, callFirst, grpcAutoCall, looksEmpty
│   └── graphql.ts           # plain fetch-based GraphQL client
├── util/
│   ├── json.ts              # bigint + Uint8Array + Timestamp → JSON-safe walker
│   ├── format.ts            # ok() / fail() — MCP content shape with routing trace
│   └── validate.ts          # input validators (object_id, digest, address, etc.)
└── tools/
    ├── ledger.ts            # gRPC LedgerService — auto-routed
    ├── state.ts             # gRPC StateService
    ├── move_package.ts      # gRPC MovePackageService
    ├── graphql.ts           # GraphQL curated tools + escape hatch
    ├── subscription.ts      # gRPC SubscriptionService — bounded window
    ├── execution.ts         # Simulate (always) + Execute (gated)
    ├── grpc_raw.ts          # sui_grpc_call escape hatch
    └── introspect.ts        # service catalog + endpoint listing

examples/
└── agent.ts                 # ~250-line interactive CLI agent

scripts/
└── smoke.mjs                # offline sanity check

A few quirks the wrappers smooth over (learned the hard way):

  • SuiGrpcClient's constructor takes { network, baseUrl }. Passing the URL under a url key instead of baseUrl produces an opaque error inside GrpcWebFetchTransport.makeUrl.
  • Without an explicit read_mask, gRPC responses contain only identifying digests. The curated tools pass sensible default paths.
  • Method access drifts between SDK minors. The callFirst fallback chain handles this transparently.
  • Object versions are Lamport timestamps, not a +1 counter. sui_object_history_step encapsulates the canonical previousTransaction → effects.changedObjects.inputVersion walk.
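The last quirk is worth spelling out: you cannot find an object's previous version by subtracting one. A sketch of the walk the bullet describes, with field names modeled on the prose rather than the repo's actual types:

```typescript
// Given an object at some version, find its previous version by following the
// transaction that produced it and reading that transaction's effects.
// Illustrative shapes only — not the repo's real response types.
interface ObjectAtVersion {
  version: number;
  previousTransaction: string; // digest of the tx that produced this version
}
interface TxEffects {
  changedObjects: Array<{ objectId: string; inputVersion: number | null }>;
}

function previousVersion(
  obj: ObjectAtVersion,
  objectId: string,
  effectsOf: (digest: string) => TxEffects,
): number | null {
  const effects = effectsOf(obj.previousTransaction);
  const changed = effects.changedObjects.find((c) => c.objectId === objectId);
  return changed?.inputVersion ?? null; // may be far below version - 1
}
```

Because versions are Lamport timestamps shared across all objects a transaction touches, the gap between consecutive versions of one object can be arbitrarily large.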


License

Apache-2.0.
