MCP For Portfolio
MCP server by Chauhan-yuvraj
Created on 11/21/2025
Updated 24 days ago
README
Repository documentation and setup instructions
Below are the README.md files for the Backend and the Frontend.
### 1. Backend README
**Location:** `my-portfolio-ai-backend/README.md`
# 🧠 MCP AI Portfolio Backend
This is the intelligent backend for my personal portfolio website. It leverages **Google Gemini 2.0 Flash** and the **Model Context Protocol (MCP)** to create a "living" AI agent that knows everything about my professional background, GitHub stats, and blogs.
Built with **FastAPI** and **Python**, it acts as a bridge between the frontend and the LLM, enabling tool calling and real-time data fetching.
## 🚀 Features
- **MCP Architecture:** Uses Anthropic's Model Context Protocol to standardize tool usage.
- **Single Source of Truth:** Reads data dynamically from modular Markdown files (`data/`).
- **Live GitHub Stats:** Fetches real-time stars, repositories, and streak data using the GitHub API.
- **Context Aware:** Handles user history and remembers names across sessions.
- **Gemini 2.0 Flash:** Powered by Google's latest high-speed model.
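The "single source of truth" feature can be sketched as a small loader: each Markdown file in `data/` becomes one named section of the agent's knowledge base. `load_knowledge_base` is a hypothetical helper for illustration, not necessarily the function used in `server.py`.

```python
from pathlib import Path

def load_knowledge_base(data_dir: str = "data") -> dict[str, str]:
    """Map each Markdown file's stem (e.g. 'about') to its raw text.

    Illustrative sketch: the real server may structure this differently.
    """
    return {
        path.stem: path.read_text(encoding="utf-8")
        for path in sorted(Path(data_dir).glob("*.md"))
    }
```

Because the files are read dynamically, editing `about.md` changes the agent's answers on the next request with no code changes.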
## 📂 Project Structure
```text
/
├── main.py             # FastAPI entry point & chat logic
├── server.py           # MCP server & tool definitions
├── data/               # The knowledge base (Markdown)
│   ├── about.md        # Bio, contact, skills
│   ├── experience.md   # Work history
│   ├── projects.md     # Static projects list
│   └── blogs.md        # Blog links
├── .env                # Secrets (not on GitHub)
└── requirements.txt    # Dependencies
```
## 🛠️ Installation & Setup
1. **Clone the repository:**
   ```bash
   git clone <repo-url>
   cd my-portfolio-ai-backend
   ```
2. **Create a virtual environment** (optional but recommended):
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   ```
4. **Set up environment variables:** create a `.env` file in the root directory:
   ```env
   GOOGLE_API_KEY=your_gemini_api_key
   GITHUB_TOKEN=your_github_classic_token
   GITHUB_USERNAME=Chauhan-yuvraj
   DEV_TO_USER=uvizhere
   ```
5. **Run the server:**
   ```bash
   uvicorn main:app --reload
   ```
   The API will be live at `http://127.0.0.1:8000/chat`.
## 📡 API Endpoint
**POST** `/chat`

**Payload:**
```json
{
  "message": "What is his GitHub streak?",
  "username": "Aditya",
  "history": [
    {"role": "user", "content": "Hi"},
    {"role": "model", "content": "Hello Aditya!"}
  ]
}
```
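As a sketch, the payload above can be built and sent with nothing but the Python standard library. This assumes the local URL from the setup section; `build_chat_payload` and `send_chat` are illustrative names, not part of the backend's actual code.

```python
import json
from urllib import request

def build_chat_payload(message: str, username: str, history: list[dict]) -> dict:
    # Field names match the payload example above.
    return {"message": message, "username": username, "history": history}

def send_chat(payload: dict, url: str = "http://127.0.0.1:8000/chat") -> str:
    # POST the JSON payload and return the raw response body.
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

The `history` list lets the backend keep the conversation context described in the Features section.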
## ☁️ Deployment (Render)
1. Connect the repo to Render.
2. Select the **Python 3** runtime.
3. Build Command: `pip install -r requirements.txt`
4. Start Command: `uvicorn main:app --host 0.0.0.0 --port $PORT`
5. Add the environment variables from your `.env` file.
---
### 2. Frontend README
**Location:** `my-portfolio-frontend/README.md` (or your root folder)
# 🎨 Yuvraj's Portfolio & AI Chatbot
A modern, highly interactive portfolio website built with **Next.js**, **TypeScript**, and **Tailwind CSS**.
The highlight of this project is the integrated **AI Assistant (UV)**. Unlike standard chatbots, UV uses the **Model Context Protocol (MCP)** to fetch real-time data about my GitHub activity, read my resume, and answer questions about my specific tech stack.
## ✨ Key Features
- **🤖 AI Agent Integration:** Embeds a custom MCP-powered chatbot that can browse my projects and stats.
- **💾 Persistent Memory:** Remembers the user's name across sessions using LocalStorage.
- **📱 Responsive Design:** Features a dedicated mobile drawer for the chat interface.
- **🎭 Animations:** Smooth transitions and "dust" text effects using **Framer Motion**.
- **🖼️ Dynamic Avatars:** Generates custom pixel-art avatars for users via DiceBear API.
## 🛠️ Tech Stack
- **Framework:** Next.js 14 (App Router)
- **Language:** TypeScript
- **Styling:** Tailwind CSS
- **Animations:** Framer Motion
- **Icons:** Lucide React
- **Backend:** Python (FastAPI + MCP) [Hosted separately]
## 🚀 Getting Started
1. **Clone the repository:**
   ```bash
   git clone <repo-url>
   cd my-portfolio
   ```
2. **Install dependencies:**
   ```bash
   npm install
   # or
   yarn install
   ```
3. **Configure the backend:** open `src/services/chatService.ts` and update the `API_URL` to point to your backend:
   ```ts
   const API_URL = "http://127.0.0.1:8000/chat"; // Local
   // OR
   const API_URL = "https://your-backend.onrender.com/chat"; // Production
   ```
4. **Run the development server:**
   ```bash
   npm run dev
   ```
5. Open [http://localhost:3000](http://localhost:3000) in your browser.
## 📂 Architecture Overview
1. User asks a question (e.g., "List his repos").
2. Frontend sends the message + history to the FastAPI backend.
3. Backend initializes the Gemini 2.0 agent with a system prompt.
4. Gemini decides it needs a tool (`get_github_repos`) and requests it.
5. The MCP server executes the Python function to fetch data from the GitHub API.
6. Gemini summarizes the raw data and sends a natural-language response back to the frontend.
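The loop above can be sketched with stand-in functions. `call_model`, `handle_chat`, and the stubbed `TOOLS` registry are hypothetical names for illustration; the real backend calls Gemini and the MCP server in their place.

```python
# Stand-in tool registry: the MCP server exposes Python functions like this.
TOOLS = {"get_github_repos": lambda: ["mcp-for-portfolio", "my-portfolio"]}

def call_model(message: str, tool_result=None) -> dict:
    # Stub for the Gemini call: with no tool data it requests a tool,
    # and once it has the raw result it produces a natural response.
    if tool_result is None:
        return {"tool_call": "get_github_repos"}
    return {"text": f"He has {len(tool_result)} public repos, e.g. {tool_result[0]}."}

def handle_chat(message: str) -> str:
    reply = call_model(message)
    while "tool_call" in reply:               # step 4: model requests a tool
        result = TOOLS[reply["tool_call"]]()  # step 5: MCP executes the function
        reply = call_model(message, result)   # step 6: model summarizes raw data
    return reply["text"]
```

The `while` loop matters: a single question may trigger several tool calls before the model settles on a final answer.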
## 📄 License
This project is open-source and available under the MIT License.
Quick Setup
Installation guide for this server

Install the package (if needed):
```bash
uvx mcp-for-portfolio
```
Cursor configuration (mcp.json):
```json
{
  "mcpServers": {
    "chauhan-yuvraj-mcp-for-portfolio": {
      "command": "uvx",
      "args": [
        "mcp-for-portfolio"
      ]
    }
  }
}
```