# Agents
## Philosophy: thin wrapper, maximum capability

MC-AI treats agents as configuration, not code. There are no base classes to extend, no framework to inherit from. An agent is a JSON object that the engine feeds into the Vercel AI SDK runtime.
## Agent config

```ts
interface AgentRegistryEntry {
  id: string;            // Unique identifier (UUID)
  name: string;          // Human-readable name
  description?: string;  // Used by supervisors to understand this agent's purpose

  // LLM configuration
  model: string;         // Model ID (e.g., "claude-sonnet-4-20250514", "gpt-4o")
  provider: string;      // Provider name (e.g., "anthropic", "openai", "groq")
  system_prompt: string; // The "soul" of the agent — persona, constraints, style
  temperature: number;   // Creativity (0.0 = deterministic, 1.0 = creative)
  max_steps: number;     // Safety limit for tool-use loops (default: 10)

  // Capabilities — structured tool sources
  tools: ToolSource[];   // Built-in tools and MCP server references

  // Security (Zero Trust)
  permissions: {
    read_keys: string[];  // State keys this agent can read
    write_keys: string[]; // State keys this agent can write
    budget_usd?: number;  // Max cost per run
  };
}

// Tool source type
type ToolSource =
  | { type: 'builtin'; name: 'save_to_memory' | 'architect_*' }
  | { type: 'mcp'; server_id: string; tool_names?: string[] };
```

The `provider` field accepts any string — not just `'openai'` or `'anthropic'`. Any Vercel AI SDK-compatible provider can be registered at runtime via the `ProviderRegistry`. See Custom LLM Providers for details.
## Agent registry

Agents are registered in an `AgentRegistry` before the graph runs:

```ts
import { InMemoryAgentRegistry, configureAgentFactory } from '@mcai/orchestrator';

const registry = new InMemoryAgentRegistry();
registry.register({
  id: 'researcher-001',
  name: 'Researcher',
  model: 'claude-sonnet-4-20250514',
  provider: 'anthropic',
  system_prompt: 'You are a research specialist...',
  temperature: 0.5,
  max_steps: 5,
  tools: [
    { type: 'builtin', name: 'save_to_memory' },
    { type: 'mcp', server_id: 'web-search' },
  ],
  permissions: { read_keys: ['topic'], write_keys: ['notes'] },
});

configureAgentFactory(registry);
```

For production, use `@mcai/orchestrator-postgres` to load agent configs from a database.
## Runtime execution

The agent executor:

- Loads the agent config from the registry
- Creates a state view — a filtered slice of `WorkflowState.memory` based on `read_keys`
- Builds the prompt with the goal, state view, and iteration context
- Calls `streamText` with the configured model, system prompt, and tools
- Extracts `save_to_memory` tool calls from all steps
- Validates write permissions (Zero Trust)
- Propagates taint from any tainted input keys
- Packages everything into an `Action` for the reducer
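The state-view step above can be sketched as a simple key filter over workflow memory. A minimal illustration, assuming the engine's internals; `buildStateView` is a hypothetical name, not an exported API:

```typescript
// Hypothetical sketch: build an agent's state view by keeping only the
// memory keys listed in its read_keys. Everything else stays invisible
// to the agent (Zero Trust reads).
function buildStateView(
  memory: Record<string, unknown>,
  readKeys: string[],
): Record<string, unknown> {
  const view: Record<string, unknown> = {};
  for (const key of readKeys) {
    if (key in memory) {
      view[key] = memory[key];
    }
  }
  return view;
}

const view = buildStateView(
  { topic: 'solar power', notes: 'draft', api_token: 'hidden' },
  ['topic'], // this agent can only read "topic"
);
// view → { topic: 'solar power' } — "notes" and "api_token" are filtered out
```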
```ts
import { streamText, stepCountIs } from 'ai';

const result = await streamText({
  model,                       // Resolved from config via ProviderRegistry
  system: systemPrompt,        // Built from config + injected state view
  prompt: taskPrompt,          // Goal + iteration context
  tools,                       // Resolved from ToolSource[] via MCPConnectionManager
  stopWhen: stepCountIs(maxSteps),
  abortSignal: combinedSignal, // Workflow cancellation + timeout
});
```

## save_to_memory
The primary way agents write to state is via the built-in `save_to_memory` tool. This tool is automatically available to every agent:

```
Agent: "I'll save my research notes."
→ save_to_memory({ key: "notes", value: "..." })
```

The key must match one of the agent's `write_keys`. If it doesn't, the write is rejected.
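The Zero Trust write check described above can be sketched as follows. This is an illustration only, assuming the executor's behavior; `validateWrites` and `SaveCall` are hypothetical names, not part of the library:

```typescript
// Shape of an extracted save_to_memory tool call (illustrative).
interface SaveCall {
  key: string;
  value: unknown;
}

// Hypothetical sketch: accept a save_to_memory call only if its key is
// listed in the agent's write_keys; reject everything else.
function validateWrites(
  calls: SaveCall[],
  writeKeys: string[],
): { accepted: SaveCall[]; rejected: SaveCall[] } {
  const accepted: SaveCall[] = [];
  const rejected: SaveCall[] = [];
  for (const call of calls) {
    if (writeKeys.includes(call.key)) {
      accepted.push(call);
    } else {
      rejected.push(call);
    }
  }
  return { accepted, rejected };
}

const { accepted, rejected } = validateWrites(
  [
    { key: 'notes', value: 'research summary...' },
    { key: 'budget_usd', value: 0 }, // not in write_keys
  ],
  ['notes'],
);
// accepted → the "notes" write; rejected → the "budget_usd" write
```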
## Creating a new agent

Add a JSON config to the registry. No classes needed:

```json
{
  "id": "coding-assistant",
  "name": "Coding Assistant",
  "model": "claude-sonnet-4-20250514",
  "provider": "anthropic",
  "system_prompt": "You are an expert TypeScript engineer...",
  "temperature": 0.3,
  "max_steps": 10,
  "tools": [
    { "type": "builtin", "name": "save_to_memory" },
    { "type": "mcp", "server_id": "code-sandbox", "tool_names": ["fs_read", "fs_write"] }
  ],
  "permissions": {
    "read_keys": ["goal", "requirements"],
    "write_keys": ["code_output"]
  }
}
```

## Next steps
- Reducers — how agent outputs become state changes
- Custom LLM Providers — use Groq, Ollama, or any provider
- Your First Workflow — build an end-to-end workflow