
PyData Boston MCP Talk Examples

This repo contains the code examples I showed during my talk on MCP at the PyData Boston Meetup on 2025-10-08.

Setup

Install dependencies: uv sync.

Then create a .env file with an OpenAI API key in it:

OPENAI_API_KEY=<your-key-here>
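The examples presumably load this file with python-dotenv; here is a minimal sketch of that pattern (how the repo actually reads the key is an assumption):

```python
# Minimal sketch, assuming python-dotenv; the repo's actual loading code may differ.
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()      # copies OPENAI_API_KEY from .env into the process environment
client = OpenAI()  # the OpenAI client picks the key up from the environment
```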

Overview

Example 1

A minimal MCP server and client. This is meant as the "hello world" of MCP.

Run it from the example_1/ folder using

uv run python client.py
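For orientation, here is a minimal sketch of what such a server/client pair can look like with the official MCP Python SDK; the greet tool and the server.py filename are illustrative, not necessarily the repo's actual code:

```python
# --- server.py: a one-tool MCP server (illustrative sketch) ---
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello-world")

@mcp.tool()
def greet(name: str) -> str:
    """Return a friendly greeting."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default

# --- client.py: spawn the server over stdio and call the tool ---
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="uv", args=["run", "python", "server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("greet", {"name": "PyData"})
            print(result.content)

asyncio.run(main())
```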

Example 2

A minimal example of all three client features (elicitation, roots, and sampling). It demonstrates how to wire up a server and client so that these features work at a basic level.

Run it from the example_2/ folder using

uv run python client.py
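All three features are requests that flow from the server back to the client, so the client has to register a handler for each. Below is a hedged sketch of that wiring, based on the callback hooks in recent versions of the MCP Python SDK; the handler bodies and the server.py filename are illustrative, not the repo's actual code:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

async def handle_sampling(ctx, params: types.CreateMessageRequestParams) -> types.CreateMessageResult:
    # Sampling: the server asks the client's LLM for a completion.
    return types.CreateMessageResult(
        role="assistant",
        content=types.TextContent(type="text", text="stub completion"),
        model="stub-model",
        stopReason="endTurn",
    )

async def handle_roots(ctx) -> types.ListRootsResult:
    # Roots: tell the server which directories it may treat as its workspace.
    return types.ListRootsResult(roots=[types.Root(uri="file:///tmp/demo")])

async def handle_elicitation(ctx, params: types.ElicitRequestParams) -> types.ElicitResult:
    # Elicitation: the server asks the user a question mid-request.
    answer = input(f"{params.message} ")
    return types.ElicitResult(action="accept", content={"answer": answer})

async def main() -> None:
    server = StdioServerParameters(command="uv", args=["run", "python", "server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(
            read, write,
            sampling_callback=handle_sampling,
            list_roots_callback=handle_roots,
            elicitation_callback=handle_elicitation,
        ) as session:
            await session.initialize()
            print(await session.list_tools())

asyncio.run(main())
```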

Example 3

An example of how to use prompts, resources, and tools. It also features elicitation and roots. This example is a bit more sophisticated and closer to a real-world application, to showcase how complexity grows once you start doing meaningful things with MCP.

Run it from the example_3/ folder using

uv run python client.py

The theme is an application that helps you co-write LinkedIn posts. It can draft them for you, but you provide the content and do the revising.

The example will ask you for the premise and ending of the post and set up a new "project folder" in the ./posts directory. It then launches a simple chat loop in your terminal that lets you talk to a vanilla LLM. The loop initially has no knowledge of the project. You can trigger more sophisticated interactions using the following commands:

  • Using a "/" command temporarily changes the system prompt and starts an agentic trace (see client.py lines 86f).
    • /context will trigger the LLM to ask you questions about the content of the post and populate the chat history.
    • /outline will use your chat history to draft a bullet-point outline for the post in ./posts/<project>/outline.md.
    • /prose will use the chat history and outline to draft a post in ./posts/<project>/prose.md.
    • Upon completion of the agentic trace, the system prompt is reset to the default OpenAI system prompt.
  • An "@" keyword triggers a prompt (see client.py lines 105f).
    • @brainstorm will be replaced with the prompt in ./src/prompts/brainstorm.md.
    • @reflection will be replaced with the prompt in ./src/prompts/reflection.md.
    • @reminder will build a prompt that contains the current project context and "remind" the LLM of the current progress.
    • Each keyword will be replaced, meaning you can write a prompt before or after, e.g. @brainstorm We need to come up with a better hook.
  • The LLM has access to two tools that it can call at its own discretion:
    • write_file This tool writes an outline.md or prose.md file to the current project directory.
    • add_details This tool lets the agent ask you a question. It looks like an inverted "normal" chat turn but goes through MCP elicitation (see the sketch after this list). The reason for this is that a "clarification tool call" does not interrupt the agentic trace, which (a) allows the LLM to continue its task after it gets the answer, and (b) preserves the trace's thinking tokens instead of destroying them the way a turn would. A "turn" is when the trace finishes with the question as an "assistant" message, the user sends a "user" message, and the LLM starts a new trace with the previous assistant and user messages in the conversation history.
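Here is a hedged sketch of what such an elicitation-backed clarification tool can look like with the SDK's FastMCP context; the tool name matches the README, but the server name, schema, and body are illustrative, not the repo's actual implementation:

```python
from pydantic import BaseModel
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("linkedin-copilot")  # illustrative server name

class Answer(BaseModel):
    answer: str

@mcp.tool()
async def add_details(question: str, ctx: Context) -> str:
    """Ask the user a clarifying question without ending the agentic trace."""
    # ctx.elicit sends an elicitation request to the client, which shows the
    # question to the user; the answer feeds straight back into the trace.
    result = await ctx.elicit(message=question, schema=Answer)
    if result.action == "accept":
        return result.data.answer
    return "The user declined to answer."
```

Because the answer comes back as a tool result, the model's reasoning context survives, whereas ending the trace with an assistant question would discard it.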

The way you would use this demo:

  • Start by responding to the premise and ending prompts during startup. (You are only asked if those files don't already exist in the project.)
  • Next, do a /context run to start chatting with the agent about the post.
  • Soon after, do an /outline run to summarize the discussion into an outline.
  • Read what the agent produced. (It is likely mediocre.)
  • Use @brainstorm and @reflection to refine the content of the outline.
  • During the process, flush/persist the discussion using /outline.
  • Eventually, call /prose to create a written-out version of the post.
  • Again, the initial draft will be mediocre, so refine it using the prompts.

If you need to restart the discussion, you can use /reset to wipe the conversation history, or restart the program. In that case, you would likely start with @reminder to add the current versions of the outline and prose back into the conversation.

Quick Setup
Installation guide for this server

Install Package (if required)

uvx pydata-boston-mcp-talk

Cursor configuration (mcp.json)

{
  "mcpServers": {
    "firefoxmetzger-pydata-boston-mcp-talk": {
      "command": "uvx",
      "args": ["pydata-boston-mcp-talk"]
    }
  }
}