MCP server

One-prompt setup from Cursor, Claude Code, or any MCP client.

rendfly will expose an MCP server so your coding agent can wire up monitoring for you with a single prompt — no manual dashboard steps, no copy-pasting keys, no YAML configuration.

What MCP is

Model Context Protocol is Anthropic’s open standard for connecting AI agents to external tools and data sources. It defines how an agent (running in Cursor, Claude Code, or another MCP client) discovers and calls external capabilities — think of it as a typed API that coding agents can use directly. The canonical reference is modelcontextprotocol.io.

The standard is seeing rapid adoption across AI development tooling. If you’re already using an MCP-aware IDE, you’re one config entry away from having your coding agent talk to rendfly directly.
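As a sketch of what that config entry could look like: most MCP clients (Cursor, Claude Desktop, Claude Code) read a `mcpServers` map like the one below. The package name, command, and environment variable here are illustrative assumptions, not published values — the real entry will come with the server's release notes.

```json
{
  "mcpServers": {
    "rendfly": {
      "command": "npx",
      "args": ["-y", "@rendfly/mcp-server"],
      "env": { "RENDFLY_API_KEY": "<your-project-key>" }
    }
  }
}
```

The `mcpServers` shape is the client-side convention; only the inner values are rendfly-specific.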

The intended one-prompt setup

Once the rendfly MCP server ships, connecting monitoring to a new project will look like this from Cursor or Claude Code:

“Set up rendfly monitoring for this project. The system message is in prompts/system.md.”

The MCP server handles auth (or prompts for a project key), reads the system message from your repo, creates the rendfly project, and wires the provider connection — all without you opening the rendfly dashboard. The MCP server translates the prompt into the same sequence of API calls you’d make manually, but the coding agent drives it.

This is still the intended UX, not a shipped workflow. The exact prompt phrasing may change.

Tools the server will expose

Four tools are planned for the initial release:

  • rendfly_create_project — creates a rendfly project, optionally reading your agent’s system message from a file path in your repo and running the initial rule extraction pass.
  • rendfly_link_provider — wires your existing provider key into proxy mode and applies the base_url update in whichever file initializes your agent’s HTTP client.
  • rendfly_list_alerts — pulls your project’s alert history into your IDE context so you can ask questions like “which rule failed most often this week?” without leaving the editor.
  • rendfly_inspect_conversation — fetches a single conversation by ID, including its per-rule judge scores, for debugging sessions when an alert fires and you want to understand why.
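On the wire, each of these tools would be invoked with MCP's standard `tools/call` JSON-RPC request, which every MCP client already knows how to send. A short Python sketch of that message shape: the `tools/call` envelope is from the MCP specification, while the `conversation_id` argument is a hypothetical parameter for illustration.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. fetch one conversation with its per-rule judge scores
msg = make_tool_call(1, "rendfly_inspect_conversation", {"conversation_id": "conv_123"})
```

Your coding agent builds and sends these messages for you; the sketch is only to show there is nothing rendfly-specific in the transport.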

Why this is a bonus, not a primary path

The MCP integration is genuinely useful, but it’s a DX layer on top of the core ingestion paths — not a replacement for them. A few reasons:

Production agents don’t run through your IDE. Your WhatsApp support bot, your e-commerce live chat, your Slack help assistant — these are server processes or serverless functions, not coding-agent sessions. They connect through proxy mode, API mode, or the SDK wrapper. The MCP server helps you set up that connection; it doesn’t replace it.

Not everyone uses an MCP client yet. Proxy and API mode work from any HTTP client in any language. The MCP path assumes Cursor, Claude Code, or an equivalent. That’s a reasonable assumption for developers actively building agents, but it’s not universal.

Use the MCP server if it makes setup faster. Plan on proxy or API mode doing the actual monitoring work.
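In practice, "proxy mode doing the work" is a one-line change at the HTTP-client layer: the agent keeps its provider key but resolves requests against rendfly's proxy instead of the provider directly. A minimal stdlib sketch, assuming a hypothetical proxy hostname (the real one comes from your rendfly project):

```python
from urllib.parse import urljoin

# Hypothetical proxy endpoint; the real hostname comes from your rendfly project.
RENDFLY_PROXY = "https://proxy.rendfly.example/v1/"

def chat_completions_url(base_url: str) -> str:
    """Resolve the chat-completions path against whatever base URL is configured."""
    return urljoin(base_url, "chat/completions")

# Direct to the provider:
direct = chat_completions_url("https://api.openai.com/v1/")
# Through the proxy: same path, same provider key, now monitored.
proxied = chat_completions_url(RENDFLY_PROXY)
```

This is the `base_url` update the rendfly_link_provider tool is meant to apply for you.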

  • Proxy mode — the primary ingestion path that the MCP server will help you configure.
Updated 2026-05-09