🧩 MCP Demo (Model + Tool + Context)
A lightweight Node.js demonstration of MCP using OpenAI function-calling to show how LLMs can use external tools with proper context management.
📦 Project Structure
mcp-demo/
├── package.json
├── src/
│ ├── app.js # Entry point
│ ├── llm/
│ │ └── openai.js # OpenAI Chat Completions API
│ ├── mcp/
│ │ ├── context.js # Context builder + tool result enrichment
│ │ ├── orchestrator.js # MCP main workflow (runMCP)
│ │ ├── executor.js # Executes validated tool calls
│ │ └── toolRegistry.js # Tool list → OpenAI tool schemas
│ ├── tools/
│ │ ├── revenueTool.js # get_revenue function
│ │ └── timeTool.js # get_time function
│ └── utils/
│ └── logger.js # Logging utilities
└── README.md
🚀 Quick Start
1. Install dependencies
npm install
2. Set OpenAI API key
export OPENAI_API_KEY=your_api_key_here
3. Run a demo
Demo 1: Revenue for a specific month
node src/app.js "Doanh thu tháng 02/2025 là bao nhiêu?"
("What is the revenue for February 2025?")
Expected output:
LLM Response: {"index":0,"message":{"role":"assistant","content":null,"tool_calls":[{"id":"call_...","type":"function","function":{"name":"get_revenue","arguments":"{\"month\":\"02/2025\"}"}}]},"finish_reason":"tool_calls"}
LLM Response after tool call: {"index":0,"message":{"role":"assistant","content":"Doanh thu tháng 02/2025 là 150.000."},"finish_reason":"stop",...}
Final Answer:
Doanh thu tháng 02/2025 là 150.000.
Demo 2: Two chained tool calls
node src/app.js "Doanh thu tháng này là bao nhiêu?"
("What is this month's revenue?")
Expected output:
LLM Response: {"index":0,"message":{"role":"assistant","content":null,"tool_calls":[{"id":"call_CyQQAsdYY3c0vl53ilbDdgAv","type":"function","function":{"name":"get_time","arguments":"{}"}}],"refusal":null,"annotations":[]},"logprobs":null,"finish_reason":"tool_calls"}
LLM Response after tool call: {"index":0,"message":{"role":"assistant","content":null,"tool_calls":[{"id":"call_qfY30jluw01Ltq4QMqX70Q0j","type":"function","function":{"name":"get_revenue","arguments":"{\"month\":\"03/2026\"}"}}],"refusal":null,"annotations":[]},"logprobs":null,"finish_reason":"tool_calls"}
LLM Response after tool call: {"index":0,"message":{"role":"assistant","content":"Doanh thu tháng 03/2026 là 150,000.","refusal":null,"annotations":[]},"logprobs":null,"finish_reason":"stop"}
Final Answer:
Doanh thu tháng 03/2026 là 150,000.
🧠 Core Components
src/mcp/context.js
Manages contextual information and tool result enrichment:
- buildInitialContext({ userId, locale, appState }): builds the system message with user metadata, business rules, and tool usage examples; returns locale-aware context for LLM instructions
- enrichConversationWithToolResult(conversation, { toolName, result, toolCallId }): appends the tool response to the conversation history, maintaining the proper message chain for multi-turn tool use
- resolveLocaleText(locale, key): retrieves locale-specific messages (Vietnamese, English); extensible for new languages
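Based on the descriptions above, a minimal sketch of these helpers might look like the following. This is illustrative, not the repository's actual code: the LOCALE_TEXTS table, the ask_month key, and the exact message wording are assumptions.

```javascript
// Hypothetical sketch of the src/mcp/context.js helpers described above.
const LOCALE_TEXTS = {
  vi: { ask_month: "Vui lòng cung cấp tháng theo định dạng MM/YYYY." },
  en: { ask_month: "Please provide the month in MM/YYYY format." },
};

export function resolveLocaleText(locale, key) {
  // Fall back to English when the locale or key is unknown.
  return (LOCALE_TEXTS[locale] ?? LOCALE_TEXTS.en)[key] ?? LOCALE_TEXTS.en[key];
}

export function buildInitialContext({ userId, locale, appState }) {
  // System message carrying user metadata, business rules, and a locale-aware
  // instruction for the missing-month case.
  return {
    role: "system",
    content: [
      `User: ${userId} (locale: ${locale}, env: ${appState.environment})`,
      "Business rule: revenue queries require a month in MM/YYYY format.",
      `If the month is missing, reply: "${resolveLocaleText(locale, "ask_month")}"`,
    ].join("\n"),
  };
}

export function enrichConversationWithToolResult(conversation, { toolName, result, toolCallId }) {
  // The tool message must reference the assistant's tool_call id (see Troubleshooting).
  return [
    ...conversation,
    { role: "tool", tool_call_id: toolCallId, name: toolName, content: JSON.stringify(result) },
  ];
}
```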
src/mcp/orchestrator.js
Main MCP workflow (runMCP):
1. Build context: initialize the system message with user locale and business rules
2. Call LLM: send the user query along with the available tools
3. Check for tool calls: if the LLM wants to use a tool, extract the tool name and arguments, execute the tool, append the tool result to the conversation, and call the LLM again with the result
4. Return answer: the final text response from the LLM
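The steps above can be sketched as a loop. This is a hedged sketch, not the real orchestrator.js: callLLM and executeTool are injected as parameters so the control flow is visible (and runnable) without hitting the OpenAI API, and the round limit of 5 is an assumption.

```javascript
// Hypothetical sketch of the runMCP workflow described above.
export async function runMCP(userQuery, { callLLM, executeTool, systemMessage }) {
  const conversation = [systemMessage, { role: "user", content: userQuery }];

  // Loop while the model keeps requesting tools (bounded to avoid runaway loops).
  for (let step = 0; step < 5; step++) {
    const message = await callLLM(conversation);
    const toolCall = message.tool_calls?.[0];
    if (!toolCall) return message.content; // no tool requested: final answer

    conversation.push(message); // keep the assistant tool_call message in the chain
    const result = await executeTool(
      toolCall.function.name,
      JSON.parse(toolCall.function.arguments)
    );
    conversation.push({
      role: "tool",
      tool_call_id: toolCall.id,
      content: JSON.stringify(result),
    });
  }
  throw new Error("Too many tool-call rounds");
}
```

Injecting the LLM and tool executor also makes the loop trivially unit-testable with mocks.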
src/mcp/executor.js
Tool execution pipeline:
- Retrieves tool definition from registry
- Validates arguments against Zod schema
- Executes tool handler
- Returns result or error
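A sketch of this pipeline is below. For self-containment, an inline registry and a plain validate() function stand in for the real toolRegistry.js lookup and Zod safeParse; the tool entry and the error-object shape are assumptions.

```javascript
// Hypothetical executor sketch: resolve, validate, execute, and return
// either the handler result or a structured error.
const registry = new Map([
  ["get_revenue", {
    // Stand-in for Zod validation: month must be a string in MM/YYYY form.
    validate: (args) =>
      typeof args.month === "string" && /^(0[1-9]|1[0-2])\/\d{4}$/.test(args.month),
    handler: async ({ month }) => ({ month, revenue: 150000 }),
  }],
]);

export async function executeTool(name, args) {
  const tool = registry.get(name);
  if (!tool) return { error: `Unknown tool: ${name}` };
  if (!tool.validate(args)) return { error: `Invalid arguments for ${name}` };
  try {
    return await tool.handler(args);
  } catch (err) {
    return { error: err.message };
  }
}
```

Returning errors as values (rather than throwing) lets the orchestrator feed them back to the LLM as tool output, so the model can recover or ask the user for clarification.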
src/mcp/toolRegistry.js
Tool management:
- Maintains list of all available tools
- Converts tool definitions to OpenAI-compatible schemas
- Provides tool lookup by name
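The Chat Completions API expects tools as JSON-Schema function definitions. A sketch of that conversion follows; here the JSON Schema is written by hand, whereas the real toolRegistry.js derives it from each tool's Zod schema, and the helper names are assumptions.

```javascript
// Hypothetical toolRegistry sketch: map internal tool definitions to the
// `tools` array shape the OpenAI Chat Completions API expects.
const tools = [
  {
    name: "get_revenue",
    description: "Get revenue for a specific month in MM/YYYY format",
    parameters: {
      type: "object",
      properties: { month: { type: "string", description: "Month in MM/YYYY format" } },
      required: ["month"],
    },
  },
];

export function toOpenAITools(definitions = tools) {
  return definitions.map((t) => ({
    type: "function",
    function: { name: t.name, description: t.description, parameters: t.parameters },
  }));
}

export function findTool(name, definitions = tools) {
  return definitions.find((t) => t.name === name) ?? null;
}
```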
src/tools/revenueTool.js
Example revenue data tool:
import { z } from "zod";

export const revenueTool = {
  name: "get_revenue",
  description: "Get revenue for a specific month in MM/YYYY format",
  schema: z.object({
    month: z.string().describe("Month in MM/YYYY format (e.g., '02/2025')")
  }),
  handler: async ({ month }) => { /* returns revenue data */ }
};
src/llm/openai.js
OpenAI integration:
- Configures GPT-4o-mini model
- Handles tool schema formatting
- Manages function-calling flow
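One way to make that flow visible without a network call is to assemble the request body as a plain object; the real openai.js would pass this to client.chat.completions.create(...). The helper name buildChatRequest is an assumption for illustration.

```javascript
// Hypothetical sketch of the Chat Completions request body that openai.js
// sends: model, full conversation, tool schemas, and automatic tool choice.
export function buildChatRequest(conversation, openAITools) {
  return {
    model: "gpt-4o-mini",
    messages: conversation,
    tools: openAITools,
    tool_choice: "auto", // let the model decide whether to call a tool
  };
}
```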
🧪 Demo Usage Examples
Example 1: Revenue query with month
node src/app.js "Doanh thu tháng 02/2025 là bao nhiêu?"
Flow:
1. LLM identifies month 02/2025 from the Vietnamese query
2. Calls get_revenue with {"month":"02/2025"}
3. Returns the formatted revenue
Example 2: Current time query
node src/app.js "Hôm nay là ngày mấy"
("What is today's date?")
Flow:
- LLM recognizes date query
- May call time tool if configured
- Returns current date response
Example 3: Query with missing context
node src/app.js "Doanh thu là bao nhiêu?"
("What is the revenue?")
Expected:
- Context rules require a month clarification
- LLM prompts the user for the MM/YYYY format
🛠️ How MCP Works: The Flow
User Input
↓
[1] buildInitialContext
↓
[2] callLLM with tools
↓
└─→ LLM response: tool_calls?
├─ YES: [3] executeTool
│ ↓
│ [4] enrichConversationWithToolResult
│ ↓
│ [2] callLLM again (with tool result)
│
└─ NO: Return final_response
🛡️ Key Features
✅ Context Injection
System context includes:
- User locale and preferences
- Business rules (e.g., month format requirements)
- Tool usage examples
- Metadata (userId, environment)
✅ Schema Validation
- Zod schemas enforce type safety
- Invalid arguments rejected before execution
- Clear error messages
✅ Tool Result Feedback Loop
- Tool output automatically appended to conversation
- LLM can reason over tool results
- Supports multi-turn tool usage
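To make the feedback loop concrete, here is an illustration of a valid multi-turn message chain (the ids and content are made up): the tool message must follow an assistant message whose tool_calls array contains the matching id, or the API rejects the request.

```javascript
// Illustrative example of a well-formed tool-use conversation.
export const exampleConversation = [
  { role: "system", content: "Business rule: months use MM/YYYY format." },
  { role: "user", content: "Doanh thu tháng 02/2025 là bao nhiêu?" },
  {
    role: "assistant",
    content: null,
    tool_calls: [{
      id: "call_1",
      type: "function",
      function: { name: "get_revenue", arguments: '{"month":"02/2025"}' },
    }],
  },
  // tool_call_id links this result back to the assistant's request above.
  { role: "tool", tool_call_id: "call_1", content: '{"month":"02/2025","revenue":150000}' },
];
```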
✅ Multilingual Support
- Vietnamese and English prompts
- Extensible locale resolver
- Business rule enforcement per locale
🔧 Extending the Demo
Add a new tool
1. Create src/tools/myTool.js:
import { z } from "zod";
export const myTool = {
name: "my_function",
description: "What this tool does",
schema: z.object({
param1: z.string().describe("Parameter description"),
}),
handler: async ({ param1 }) => {
return { result: "data" };
},
};
2. Register it in src/mcp/toolRegistry.js:
import { revenueTool } from "../tools/revenueTool.js";
import { timeTool } from "../tools/timeTool.js";
import { myTool } from "../tools/myTool.js";

export const tools = [revenueTool, timeTool, myTool];
Improve month validation
In revenueTool.js, add regex validation:
schema: z.object({
month: z
.string()
.regex(/^(0[1-9]|1[0-2])\/\d{4}$/, "Format must be MM/YYYY")
.describe("Month in MM/YYYY format"),
})
Connect real database
Modify revenueTool.handler:
handler: async ({ month }) => {
  const rows = await db.query("SELECT revenue FROM revenue WHERE month = ?", [month]);
  if (rows.length === 0) return { month, error: "No revenue data for this month" };
  return { month, revenue: rows[0].revenue };
}
Add authentication
In orchestrator.js:
const context = buildInitialContext({
userId: req.user.id,
locale: req.user.locale,
appState: { environment: "prod" }
});
📊 Architecture Diagram
┌─────────────────────────────────────────────────────┐
│ User Query (Vietnamese/English) │
└──────────────────┬──────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ orchestrator.js: runMCP() │
│ - buildInitialContext() │
│ - Add system message + user query │
└──────────────────┬──────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ openai.js: callLLM() │
│ - Send to GPT-4o-mini with tool schemas │
└──────────────────┬──────────────────────────────────┘
↓
┌─────────┴─────────┐
↓ ↓
[Tool Calls] [Direct Answer]
↓ ↓
executor.js RETURN
- Resolve tool
- Validate args
- Execute handler
↓
enrichConversationWithToolResult()
↓
callLLM() again
↓
Return final answer
🧾 Troubleshooting
Error: Invalid input expected string, received undefined
Cause: LLM did not provide required parameters
Solution: Improve system prompt with clear examples
Error: Invalid parameter: messages with role 'tool' must be a response to preceeding message with tool_calls
Cause: The message chain is broken: a tool message appears without a matching assistant message containing tool_calls
Solution: Ensure enrichConversationWithToolResult receives the full conversation history, including the assistant message that issued the tool call
Error: 401 Unauthorized when calling OpenAI
Cause: Invalid or missing API key
Solution:
export OPENAI_API_KEY=sk-...
echo $OPENAI_API_KEY # verify it's set
LLM not using tools
Cause: Tool schema not properly formatted or example not clear
Solution:
- Check the Zod schema conversion in toolRegistry.js
- Add an explicit tool usage example to the system message in context.js
- Verify tool names are included in the OpenAI request
📝 Recommended Next Steps
- [ ] Add persistent conversation history (database)
- [ ] Implement rate limiting for API calls
- [ ] Add monitoring/logging for tool execution
- [ ] Create integration tests for MCP flow
- [ ] Add TypeScript for type safety
- [ ] Document cost estimation per tool call
- [ ] Implement retry logic for failed tool calls
- [ ] Add support for streaming responses
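As a starting point for the retry item above, a minimal helper might look like this. It is a sketch: real code would add jitter, cap backoff, and distinguish retryable from permanent errors.

```javascript
// Minimal illustrative retry wrapper: re-invoke fn up to `attempts` times,
// optionally sleeping between tries, and rethrow the last error on failure.
export async function withRetry(fn, { attempts = 3, delayMs = 0 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (delayMs) await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

A tool call could then be wrapped as `withRetry(() => executeTool(name, args))` inside the orchestrator.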
🏗️ Technology Stack
- Runtime: Node.js v24+
- LLM: OpenAI GPT-4o-mini
- Schema Validation: Zod
- Environment: dotenv
📄 License
MIT
🤝 Contributing
Contributions welcome! Follow these patterns:
- Use Zod for schema definitions
- Keep tool handlers pure and side-effect-free
- Test tool outputs match expected schema
- Update README with new tool examples
Made with ❤️ for MCP demos