MCP Servers

模型上下文协议服务器、框架、SDK 和模板的综合目录。

A
Ai Books MCP Server
作者 @Tryboy869

🚀 AI Books MCP Server - Extend LLM context 15-60× via gravitational memory | Official MCP server for Claude Code & Anthropic

创建于 2/11/2026
更新于 13 days ago
Repository documentation and setup instructions

AI Books MCP Server

Universal LLM Context Extension via Gravitational Memory Compression

License: MIT MCP

Extend any LLM's context window by 15-60× while maintaining 100% data integrity. Built on quantum-inspired gravitational memory compression.

🚀 Features

  • Massive Context Extension: Extend LLM context 15-60× beyond native limits
  • 100% Data Integrity: Cryptographic hash verification ensures perfect accuracy
  • Universal Compatibility: Works with Claude, GPT-4, Llama, and any LLM
  • Zero Configuration: Works out of the box with Claude Code
  • Lightning Fast: Query libraries in milliseconds
  • Memory Efficient: Compression ratios up to 1240× on dense technical content

📦 Installation

For Claude Code Users

npm install -g ai-books-mcp-server

Then add to your Claude Code MCP settings:

{
  "mcpServers": {
    "ai-books": {
      "command": "ai-books-mcp-server"
    }
  }
}

For Developers

git clone https://github.com/TryBoy869/ai-books-mcp-server.git
cd ai-books-mcp-server
npm install
npm run build

🎯 Use Cases

1. Large Codebases

Create library from 100+ files → Query specific functionality → Get precise answers

2. Research Papers

Compress 50 papers → Ask synthesis questions → Get citations + insights

3. Documentation

Load entire docs → Natural language queries → Contextual answers

4. Books & Long-form Content

Compress novels/textbooks → Ask thematic questions → Deep analysis

🛠️ Available Tools

Core Tools

create_knowledge_library

Creates a compressed knowledge library from text.

{
  name: "react-docs",
  text: "...full React documentation...",
  n_max: 15  // Optional: compression level (5-20)
}

query_knowledge_library

Queries a library and retrieves relevant context.

{
  library_name: "react-docs",
  query: "How do hooks work?",
  top_k: 8  // Optional: number of chunks (1-20)
}

extend_context_from_files

Loads files and retrieves relevant context in one step.

{
  file_paths: ["./src/*.ts"],
  query: "Explain the authentication flow",
  top_k: 8
}

Management Tools

  • list_knowledge_libraries: List all libraries
  • get_library_stats: Detailed statistics
  • delete_knowledge_library: Remove a library
  • verify_library_integrity: Check 100% integrity
  • search_documents: Search with relevance scores

📖 Example Usage

In Claude Code

User: Can you help me understand this React codebase?

Claude: [Calls create_knowledge_library with all React files]
        [Creates library "react-project" with 245 chunks, 45× compression]
        
User: How does the authentication system work?

Claude: [Calls query_knowledge_library]
        [Retrieves 8 most relevant chunks from authentication code]
        [Provides detailed explanation with exact code references]

Result

Instead of:

  • ❌ "I can only see a few files at once"
  • ❌ "The codebase is too large for my context"

You get:

  • ✅ Full understanding of 100+ file codebases
  • ✅ Accurate answers with specific code references
  • ✅ Synthesis across multiple files

🧬 How It Works

Gravitational Memory Compression

Based on quantum physics' atomic orbital structure:

  1. Text Chunking: Split documents into 200-300 word chunks
  2. Hash Generation: SHA-256 hash for each chunk
  3. Orbital Encoding: Map hash to gravitational states (quantum-inspired)
  4. Compression: Achieve 15-60× reduction while maintaining retrievability
  5. Verification: 100% integrity guaranteed via hash comparison

Technical Details

  • Algorithm: Gravitational bit encoding with n_max orbitals
  • Compression: 1240 discrete states per bit (n_max=15)
  • Retrieval: O(N) semantic similarity + O(1) hash lookup
  • Integrity: Cryptographic verification (SHA-256)

📊 Performance

| Metric | Value | |--------|-------| | Compression Ratio | 15-60× (typical) | | Data Integrity | 100% guaranteed | | Query Speed | < 100ms (1000 chunks) | | Max Library Size | Limited by RAM | | Chunk Retrieval | O(N) similarity scan |

🎓 Created By

Daouda Abdoul Anzize

  • Self-taught Systems Architect
  • 40+ Open Source Projects
  • Specialization: Meta-architectures & Protocol Design

Portfolio: tryboy869.github.io/daa
GitHub: @TryBoy869
Email: anzizdaouda0@gmail.com

📄 License

MIT License - See LICENSE file

🤝 Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing)
  5. Open a Pull Request

🐛 Issues

Found a bug? Have a feature request?

Open an issue

🌟 Star History

If you find this useful, please star the repo! ⭐

🔗 Links


Built with ❤️ by Daouda Anzize | Extending LLM horizons, one library at a time

快速设置
此服务器的安装指南

安装包 (如果需要)

npx @modelcontextprotocol/server-ai-books-mcp-server

Cursor 配置 (mcp.json)

{ "mcpServers": { "tryboy869-ai-books-mcp-server": { "command": "npx", "args": [ "tryboy869-ai-books-mcp-server" ] } } }