Using an AI coding tool? Skip to Agentic Install (MCP) to integrate Convoy with a single prompt.

Convoy

Convoy is a phased rollout platform for AI agents. Any change to a model, prompt, tool, workflow step, or sandbox can be tested against real user traffic. Convoy routes a small slice of requests to the new version, evaluates the results, and automatically promotes or rolls back the change.

Integration

Every AI agent you want to test on Convoy must be triggered by an API call. Each endpoint is a testable unit — whether it’s a microservice API, a route in a monolith, or any unit you want to roll out end-to-end. There are two integration points: the client, which sends its requests through the Convoy proxy, and the agent backend, which verifies those requests and reports metrics back to Convoy.

How it works

  1. Your client sends a request to the Convoy proxy instead of directly to your agent
  2. Convoy routes it to the correct version (stable or test) and forwards it to your agent backend
  3. Your agent verifies the request came from Convoy, processes it, and reports metrics (latency, outcome, cost, input/output) back to Convoy
  4. Convoy uses these metrics — along with an LLM judge — to automatically promote or roll back test versions
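Step 3 above hinges on the agent verifying that a request really came from Convoy, using the shared secret. As a minimal sketch, assuming an HMAC-SHA256 signature over the raw request body (the signing scheme and header name here are assumptions — confirm the actual verification details in the session ingest docs):

```python
import hashlib
import hmac


def sign_body(shared_secret: str, raw_body: bytes) -> str:
    """Hex HMAC-SHA256 digest of the raw body (assumed signing scheme)."""
    return hmac.new(shared_secret.encode(), raw_body, hashlib.sha256).hexdigest()


def verify_convoy_signature(shared_secret: str, raw_body: bytes, signature: str) -> bool:
    """Reject requests whose signature doesn't match the shared secret.

    Uses a constant-time comparison so the check doesn't leak timing info.
    """
    expected = sign_body(shared_secret, raw_body)
    return hmac.compare_digest(expected, signature)
```

In your route handler, compute the signature over the exact bytes you received (before any JSON parsing) and return 401/403 on a mismatch, so only Convoy traffic reaches the agent.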

Before you start

In the Convoy platform, create an agent to get:
  • Proxy URL — where your client sends requests (e.g. acme--chatbot.proxy.convoylabs.com)
  • Shared secret — used by both the client (as a bearer token) and the agent (for signature verification and metric reporting)
  • Stable URL — your production agent backend
  • Testing URL — your test environment for the new version of your agent
Both your stable and testing environments must be running and reachable. Convoy routes traffic to both — stable serves your current production version, and the testing URL serves the new version you want to evaluate. Make sure the test environment is up before deploying a new version.
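On the client side, the change is pointing requests at the proxy URL with the shared secret as a bearer token and a Session-ID header for sticky routing. A minimal sketch, assuming a JSON API (the proxy hostname is the example from above; exact header casing is an assumption — confirm it on the Convoy platform):

```python
# Hypothetical values -- take the real ones from the agent you created in Convoy.
PROXY_URL = "https://acme--chatbot.proxy.convoylabs.com"


def build_proxy_headers(shared_secret: str, session_id: str) -> dict:
    """Headers for a request routed through the Convoy proxy."""
    return {
        # Shared secret doubles as the client's bearer token.
        "Authorization": f"Bearer {shared_secret}",
        # Sticky routing: requests in the same session hit the same version.
        "Session-ID": session_id,
        "Content-Type": "application/json",
    }


# Sending the request could then look like (stdlib only, no dependencies):
#
#   import json, urllib.request
#   req = urllib.request.Request(
#       PROXY_URL,
#       data=json.dumps({"message": "hi"}).encode(),
#       headers=build_proxy_headers(SECRET, "session-42"),
#   )
#   resp = urllib.request.urlopen(req)
```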

Agentic Install (MCP)

These docs are available as an MCP server. Connect it to your AI coding tool and let your agent integrate Convoy for you.
claude mcp add --transport http convoy https://docs.convoylabs.com/mcp
Once connected, give your agent a prompt like this to get started:
Example prompt
Use the Convoy MCP to understand how to integrate Convoy into the AI agent in
this repo. First, learn how Convoy works and what integration steps are needed.
Then explore the relevant parts of my codebase and verify the details you need.
Before building the metric reporting logic, read the session ingest endpoint
docs carefully — field requirements change based on outcome and whether it's the
last step.
If anything is unclear, ask me before proceeding. Examples of things
you may need to clarify:
- Where does my client code live and where is my AI agent defined? Are they in the same repo or separate ones?
- Where are env vars and config managed?
- What should input and output contain, what should be excluded, and what context belongs in the judge prompt on the Convoy platform instead?
- How should outcome be determined, and what metrics do I already collect?
- Does my agent need to support streaming responses through the proxy?
- Is my agent already exposed via an HTTP route, or does a new route need to be created?
- The metric reporting call must include retries with exponential backoff — use the pattern from the session ingest code examples.
- Convoy uses a Session-ID header for sticky routing to the same version during rollout — does this agent handle multi-request sessions?
- The Convoy route must reject requests that fail signature verification. Should this route only accept Convoy traffic, or do we also need our own auth for direct callers?
- Does my deployment have any access controls that would block requests from the Convoy proxy, and do I need to add any to ensure only the proxy can reach the agent endpoint?

Once done, let me know what manual steps I need to complete in the Convoy
platform to finish the integration.
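The prompt above requires retries with exponential backoff around the metric reporting call. The shape of that pattern can be sketched as follows; the reporting function itself is a placeholder, and the attempt count and base delay are illustrative defaults, not values from the Convoy docs:

```python
import time


def with_backoff(fn, max_attempts: int = 4, base_delay: float = 0.5):
    """Call fn(); on failure wait base_delay * 2**attempt, then retry.

    Re-raises the last exception once max_attempts is exhausted, so the
    caller can log a dropped metric report instead of silently losing it.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))


# Usage: wrap the (placeholder) session ingest call.
# with_backoff(lambda: report_metrics(session_id, payload))
```

Keep the base delay short relative to your request latency budget, since the report runs after the user has already been answered.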
Questions? dolev@convoylabs.com · +1 (646) 685-7181