Autonomous MCP GitHub Agent
An autonomous, Agentic AI application that manages your GitHub account through natural language.
Built using the cutting-edge Model Context Protocol (MCP), this project connects an ultra-fast LLM (LLaMA-3 via Groq) to the GitHub API. Instead of just chatting, the AI acts as an autonomous agent: it reasons, remembers conversation context, and executes complex tool calls (like creating repositories, managing files, and searching issues) directly on your GitHub account.
Key Features
- Agentic Reasoning: The AI decides which tools to use and when to use them based on your natural language prompt.
- Contextual Memory: The agent remembers previous steps in the conversation (e.g., "Create a repo called X... now add a file to that repo").
- Model Context Protocol (MCP): Utilizes the official `@modelcontextprotocol/server-github` via local `stdio` to securely bridge the LLM and GitHub without exposing your token to the internet.
- Advanced Prompt Engineering: Implements strict XML-based tool-calling rules and edge-case handling (e.g., automatically creating `.gitkeep` files, since Git doesn't support empty directories).
- Interactive UI: A sleek, real-time chat interface built with Streamlit, featuring an action-tracking dropdown that shows the AI's internal reasoning and tool-execution steps.
Tech Stack
- AI & LLM: LLaMA-3.3-70B-versatile via Groq API (using the OpenAI Python SDK)
- Architecture: Agentic AI, Tool Calling, Model Context Protocol (MCP)
- Backend: Python (`asyncio`, regex parsing)
- Frontend: Streamlit
- Environment: Node.js (`npx` to run the MCP server)
Prerequisites
Before running this project, ensure you have the following installed on your machine:
- Python 3.10+
- Node.js & npm (required to run the MCP server dynamically via `npx`)
- A Groq API Key (free at console.groq.com)
- A GitHub Personal Access Token (PAT) with `repo` (Read & Write) permissions
Installation & Setup
1. Clone the repository
git clone https://github.com/BouchraBenGhazala/Autonomous-MCP-Github-Agent.git
cd Autonomous-MCP-Github-Agent
2. Create and activate a virtual environment (Recommended)
python -m venv env
# On Windows:
env\Scripts\activate
# On macOS/Linux:
source env/bin/activate
3. Install Python dependencies
pip install -r requirements.txt
4. Configure Environment Variables
Create a .env file in the root directory of the project and add your API keys:
GITHUB_PERSONAL_ACCESS_TOKEN=ghp_YOUR_GITHUB_TOKEN_HERE
GROQ_API_KEY=gsk_YOUR_GROQ_API_KEY_HERE
(Make sure to add .env to your .gitignore file so you don't accidentally publish your secrets!)
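At startup the application needs these two variables in its environment. The following is a minimal, self-contained sketch of how a `.env` file can be loaded; the real project may use a library such as python-dotenv instead, and the `GROQ_API_KEY_DEMO` key is a hypothetical name used so the demo runs anywhere.

```python
import os
import tempfile

def load_env(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv: read KEY=VALUE lines into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo on a throwaway file (hypothetical key name) so the sketch runs anywhere:
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("GROQ_API_KEY_DEMO=gsk_demo_key\n# a comment line\n")
    demo_path = fh.name
load_env(demo_path)
print(os.environ["GROQ_API_KEY_DEMO"])
```

Using `setdefault` means values already present in the shell environment take precedence over the `.env` file, which is the conventional behavior.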
Usage
Start the Streamlit application by running the following command in your terminal:
streamlit run app.py
This will open the web interface in your default browser.
Example Prompts to try:
- "What are my latest repositories?"
- "Create a new private repository called 'mcp-test-project'."
- "Add a new file named
README.mdto the 'mcp-test-project' repository saying 'Hello World'." - "Search for the 3 latest issues in the
langchain-ai/langchainrepository."
How it works (Under the hood)
- User Input: You ask the agent a question via the Streamlit UI.
- System Prompt & Schema: The Python client fetches the available tools from the local Node.js MCP Server and passes them to LLaMA-3 as a JSON schema.
- LLM Reasoning: The model analyzes the request and generates an XML-formatted `<tool_call>` block containing the exact tool name and required arguments.
- Execution: Python parses the XML, triggers the local MCP server via asynchronous pipes (`stdio`), and the server securely communicates with the GitHub API.
- Analysis & Response: The raw JSON result from GitHub is fed back into the LLM, which translates it into a clean, human-readable response displayed in the chat.