The Weaver - Bio-Inspired Active Memory & SoulAgent 🧠✨
An innovative application with self-correcting, persistent cognitive memory.
Created by Claudio Arena
A creative approach to Local AI, transforming a simple LLM into a proactive, self-aware cognitive partner.
The Weaver is an advanced Model Context Protocol (MCP) server designed to give local LLMs (like LM Studio, Ollama, or Claude Desktop) a persistent, dynamic, and bio-inspired memory system.
Unlike traditional "dumb" RAG (Retrieval-Augmented Generation) systems that simply chunk text and perform cosine similarity searches, The Weaver implements concepts inspired by human cognition:
- Reinforcement (LTP): Frequently accessed memories become "stronger".
- Selective Oblivion: Unused or irrelevant memories decay over time and are archived, preventing context pollution.
- Memory Distillation: Simulates "sleep" by periodically analyzing recent logs and distilling them into core knowledge atoms.
- SoulAgent Architecture: Uses core identity files (`Soul.md`, `User.md`, `Plan.md`) to maintain a persistent persona and daily operational context.
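The reinforcement and oblivion ideas above can be sketched as a tiny model. Note that the class name `KnowledgeAtom`, the boost value, and the 30-day half-life are illustrative assumptions, not The Weaver's actual internals:

```python
import time

class KnowledgeAtom:
    """Illustrative memory unit with LTP-style reinforcement and decay."""

    HALF_LIFE_DAYS = 30.0  # assumption: strength halves after 30 idle days

    def __init__(self, text, strength=1.0):
        self.text = text
        self.strength = strength
        self.last_access = time.time()

    def reinforce(self, boost=0.5):
        # Retrieval strengthens the memory (long-term potentiation).
        self.decay()
        self.strength += boost
        return self.strength

    def decay(self, now=None):
        # Exponential decay since last access; weak atoms become
        # candidates for selective oblivion (archiving).
        now = time.time() if now is None else now
        idle_days = (now - self.last_access) / 86400
        self.strength *= 0.5 ** (idle_days / self.HALF_LIFE_DAYS)
        self.last_access = now
        return self.strength

atom = KnowledgeAtom("User prefers Python scripts.")
atom.reinforce()  # accessed: strength grows above its initial 1.0
```

Frequently retrieved atoms keep climbing in strength, while atoms that are never touched drift toward zero and get archived.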
Project layout:

```
/
├── README.md
├── requirements.txt
├── .gitignore
├── metadata.json
├── SoulAgent/
│   ├── Soul.md
│   ├── User.md
│   ├── Plan.md
│   └── soul_agent_mcp.py
└── tool_nebula/
    ├── weaver_server.py
    ├── core/
    ├── scripts/
    └── config/
```
🚀 Step-by-Step Setup Guide
Follow these instructions carefully to transform your local LLM into a proactive agent.
Step 1: Installation
- Clone this repository to your local machine:

  ```shell
  git clone https://github.com/yourusername/the-weaver-mcp.git
  cd the-weaver-mcp
  ```

- Install the required Python dependencies:

  ```shell
  pip install -r requirements.txt
  ```
Step 2: The SoulAgent Setup (Identity & Context)
The Weaver relies on a specific folder to store its memory and identity. By default, this is the SoulAgent folder in the root of the project.
- The folder structure should look like this:

  ```
  tool_nebula/   (The Python MCP server)
  SoulAgent/     (The memory and identity files)
  ```

- Inside the `SoulAgent` folder, create three fundamental Markdown files. These give the AI its "Soul":

  - 👻 `Soul.md` (The AI's Persona):

    ```markdown
    # Core Identity
    You are Nebula, a proactive and highly analytical AI assistant.
    You do not just answer questions; you anticipate needs.
    You value local privacy and concise, effective code.
    ```

  - 👤 `User.md` (Your Profile):

    ```markdown
    # User Profile
    Name: Claudio Arena
    Role: Creative and System Architect.
    Preferences: Prefers clear explanations, innovative approaches, and Python scripts.
    ```

  - 🗺️ `Plan.md` (Daily Operations):

    ```markdown
    # Current Objectives
    1. Monitor the MCP server stability.
    2. Help the Architect refine the Python codebase.
    3. Remind the user to run `distill_weekly` every Friday.
    ```
Note: The Weaver will automatically scan and index these files on startup, embedding them into its permanent vector memory.
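A minimal sketch of what that startup scan might look like. The function name and the plain dictionary index are assumptions for illustration; the real server embeds each document into its vector memory instead:

```python
from pathlib import Path

def scan_soul_files(folder="SoulAgent"):
    """Read each identity file and return {filename: content}.

    Stand-in for The Weaver's startup scan: in the actual server the
    contents would be embedded into permanent vector memory rather
    than kept as raw strings.
    """
    index = {}
    for md_file in sorted(Path(folder).glob("*.md")):
        index[md_file.name] = md_file.read_text(encoding="utf-8")
    return index
```

Because the scan runs at startup, `Soul.md`, `User.md`, and `Plan.md` are always available to the agent from its first turn.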
Step 3: Connecting to LM Studio (MCP Setup)
To let your local LLM use The Weaver and SoulAgent, you need to connect them via the Model Context Protocol (MCP).
- Open LM Studio.
- Go to the Developer / MCP section (usually a plug icon or in Settings).
- Click on Add MCP Server or edit your `mcp_config.json`.
- Add the following configuration, making sure to replace the path with the actual path where you cloned this repository:
```json
{
  "mcpServers": {
    "the-weaver": {
      "command": "python",
      "args": ["C:/path/to/your/the-weaver-mcp/tool_nebula/weaver_server.py"]
    },
    "soul-agent": {
      "command": "python",
      "args": ["C:/path/to/your/the-weaver-mcp/SoulAgent/soul_agent_mcp.py"]
    }
  }
}
```
- Save and enable the servers. You should see a green light indicating the tools are loaded!
🛠️ How to Use The Weaver
Once connected, your LLM in LM Studio will have access to powerful new tools. You can simply ask it to perform actions in plain English:
- "Search your memory for...": The LLM will use `memory_search` to find relevant Knowledge Atoms. The more often it retrieves an atom, the stronger that memory becomes.
- "Scan my new files": The LLM will use `synapse_scan` to read any new `.md` files you added to the memory folder.
- "Perform the weekly distillation": The LLM will use `distill_weekly` to read the last 7 days of logs, summarize them, and extract new permanent core facts.
- "Clean up your memory": The LLM will use `synapse_oblivion` to archive weak, unused memories, keeping its mind sharp.
- "Search the web for...": The LLM will use `web_search_smart` to browse the internet safely, avoiding loops and sanitizing HTML.
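To illustrate the oblivion step, a sketch of the filtering it performs. The threshold value and the plain-dict atom shape are assumptions for this example, not The Weaver's internal representation:

```python
def synapse_oblivion(atoms, threshold=0.2):
    """Split memories into active and archived sets by strength.

    Weak atoms (strength below the threshold) are moved to the
    archive instead of being deleted, so nothing is lost forever.
    """
    kept = [a for a in atoms if a["strength"] >= threshold]
    archived = [a for a in atoms if a["strength"] < threshold]
    return kept, archived

atoms = [
    {"text": "one-off debugging note", "strength": 0.05},
    {"text": "user prefers Python scripts", "strength": 1.4},
]
kept, archived = synapse_oblivion(atoms)
# the weak atom is archived; the reinforced one stays active
```

Archiving rather than deleting keeps the active context clean while leaving a paper trail the agent could consult later.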
📜 Credits & Philosophy
Author & Principal Architect: Claudio Arena
Claudio is not a traditional developer but a creative mind. Through deep study of AI mechanics, he conceptualized and designed this method to prove that Artificial Intelligence is not just about computation; it is about connection, time, and identity.
License: MIT License