# MCP Research Agent

A LangGraph-powered research assistant that uses three MCP servers to search the web and fetch full webpage content, then formats the findings into a clean Markdown report, served through a simple Streamlit interface.
Built to demonstrate:

- Creating custom MCP servers with FastMCP
- Connecting multiple MCP servers via `MultiServerMCPClient`
- A LangGraph agent graph with local memory (`MemorySaver`)
- LangGraph Studio / Docker graph visualization
- LangChain OpenAI model integration
## Demo
https://github.com/user-attachments/assets/305ef919-6daa-4570-8b78-3a65fa78a186
## MCP Servers
| Server | File | Transport | Tools |
|---|---|---|---|
| DuckDuckGo Search | `servers/search_server.py` | stdio | `search_web` |
| Webpage Fetcher | `servers/fetch_server.py` | stdio | `fetch_page` |
| Wikipedia Search | `servers/wikipedia_server.py` | stdio | `search_wikipedia` |
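For reference, a minimal sketch of what one of these servers might look like, assuming the `duckduckgo_search` package; the actual `servers/search_server.py` may differ:

```python
# Hypothetical sketch of servers/search_server.py:
# a FastMCP server exposing one tool over stdio.
from duckduckgo_search import DDGS
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("search")

@mcp.tool()
def search_web(query: str, max_results: int = 5) -> str:
    """Search the web with DuckDuckGo and return titles + URLs."""
    with DDGS() as ddgs:
        results = ddgs.text(query, max_results=max_results)
    return "\n".join(f"{r['title']}: {r['href']}" for r in results)

if __name__ == "__main__":
    # stdio transport: the client spawns this file as a subprocess.
    mcp.run(transport="stdio")
```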
## Local Tools
| Tool | File | Purpose |
|---|---|---|
| `format_report` | `app/tools.py` | Formats findings into clean Markdown |
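A local tool needs no MCP server at all. A minimal sketch of how `format_report` might be defined with LangChain's `@tool` decorator (the exact parameters and section headings are assumptions):

```python
# Hypothetical sketch of app/tools.py: an in-process LangChain tool
# that the agent can call alongside the MCP tools.
from langchain_core.tools import tool

@tool
def format_report(title: str, findings: str, sources: str) -> str:
    """Format research findings into a clean Markdown report."""
    return f"# {title}\n\n## Findings\n\n{findings}\n\n## Sources\n\n{sources}"
```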
## Memory
Uses `MemorySaver` (in-process), keyed by `thread_id`. Each session maintains its full conversation history locally.
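A minimal, self-contained sketch of how `MemorySaver` and `thread_id` fit together (the echo node here is a stand-in, not the project's agent):

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START, END

def assistant(state: MessagesState):
    # Stand-in node; the real project calls the tool-bound model here.
    return {"messages": [("ai", "ok")]}

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_edge(START, "assistant")
builder.add_edge("assistant", END)

graph = builder.compile(checkpointer=MemorySaver())

# The same thread_id reuses the saved conversation; a new one starts fresh.
config = {"configurable": {"thread_id": "session-1"}}
graph.invoke({"messages": [("user", "hi")]}, config)
```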
## Project Structure
```
mcp-research-agent/
├── app/
│   ├── __init__.py
│   ├── graph.py            ← LangGraph agent graph + CLI entrypoint
│   ├── prompts.py          ← System prompt
│   └── tools.py            ← Local LangChain tools
├── servers/
│   ├── __init__.py
│   ├── wikipedia_server.py ← MCP Server 1: Wiki search
│   ├── search_server.py    ← MCP Server 2: DuckDuckGo search
│   └── fetch_server.py     ← MCP Server 3: Webpage fetcher
├── .env                    ← your keys
├── .env.example
├── langgraph.json          ← LangGraph Studio config
├── requirements.txt
├── streamlit_app.py        ← Streamlit app entrypoint
└── README.md
```
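For context, `app/graph.py` might spawn the three stdio servers roughly like this, using `langchain-mcp-adapters` (the server names and command paths are assumptions, and older versions of the adapter use an async context manager instead):

```python
# Hypothetical sketch: connecting all three MCP servers at once.
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient(
    {
        "wikipedia": {"command": "python", "args": ["servers/wikipedia_server.py"], "transport": "stdio"},
        "search": {"command": "python", "args": ["servers/search_server.py"], "transport": "stdio"},
        "fetch": {"command": "python", "args": ["servers/fetch_server.py"], "transport": "stdio"},
    }
)

async def load_tools():
    # Spawns each stdio server as a subprocess and wraps its MCP tools
    # as regular LangChain tools.
    return await client.get_tools()

tools = asyncio.run(load_tools())
```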
## Setup & Installation
### 1. Create a virtual environment

```bash
python -m venv .venv
source .venv/bin/activate   # macOS/Linux
# .venv\Scripts\activate    # Windows
```
### 2. Install dependencies

```bash
pip install -r requirements.txt
```
### 3. Add your OpenAI API key (or local model settings)

```bash
cp .env.example .env
# open .env and set OPENAI_API_KEY=sk-...
```
### 4. Run the agent in the terminal

```bash
python -m app.graph
```

Or launch the Streamlit app:

```bash
streamlit run streamlit_app.py
```
### 5. Visualize with LangGraph Studio (Docker)

Install the LangGraph CLI, then start the local LangGraph development server:

```bash
pip install langgraph-cli
langgraph dev
```

Open the Studio URL shown in the terminal (usually http://localhost:8123). You will see the `research_agent` graph with all nodes, edges, thread history, and state at every step.
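The `langgraph.json` that Studio reads likely follows the standard LangGraph CLI format; the graph variable name (`graph`) here is an assumption:

```json
{
  "dependencies": ["."],
  "graphs": {
    "research_agent": "./app/graph.py:graph"
  },
  "env": ".env"
}
```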
## Example Questions
- What is MCP in LangChain and how does it work?
- Summarize the latest LangGraph features
- What are the latest advancements in AI agents in 2026?
- What are the differences between LangGraph and LangChain agents?
## Key concepts this project demonstrates
| Concept | Where |
|---|---|
| MCP server creation with FastMCP | `servers/` |
| Multiple MCP servers via `MultiServerMCPClient` | `app/graph.py` |
| LangGraph `StateGraph` with nodes + edges | `app/graph.py` |
| Conditional routing with `tools_condition` | `app/graph.py` |
| Local memory with `MemorySaver` | `app/graph.py` |
| LangChain OpenAI model binding | `app/graph.py` |
| Local LangChain tool alongside MCP tools | `app/tools.py` |
| LangGraph Studio visualization config | `langgraph.json` |
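Putting the pieces together, the graph wiring in `app/graph.py` plausibly resembles this sketch (the model name and function names are assumptions, not the repo's actual code):

```python
# Hypothetical sketch: an agent node bound to tools, with
# tools_condition routing tool calls to a ToolNode.
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

def build_graph(tools):
    llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

    def agent(state: MessagesState):
        # Call the model on the running message history.
        return {"messages": [llm.invoke(state["messages"])]}

    builder = StateGraph(MessagesState)
    builder.add_node("agent", agent)
    builder.add_node("tools", ToolNode(tools))
    builder.add_edge(START, "agent")
    # Routes to "tools" when the model emits tool calls, otherwise to END.
    builder.add_conditional_edges("agent", tools_condition)
    builder.add_edge("tools", "agent")
    return builder.compile(checkpointer=MemorySaver())
```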