Codebuff

Overview

Why Multi-Agent Systems Work Better

Codebuff uses specialized agents that collaborate instead of one agent doing everything. Agents spawn other agents, share tools, and pass context between tasks. Here are some of the sub-agents Codebuff uses:

  • Code Generation - Write clean, functional code
  • Review - Catch bugs, security issues, style violations
  • Research - Find documentation and examples
  • Planning - Break down complex requirements
  • File Discovery - Navigate large codebases

What Makes Codebuff Agents Unique?

Codebuff agents can be programmatically controlled using TypeScript generator functions. You can write actual code to orchestrate complex workflows, make decisions based on file contents, and add determinism as you see fit. Instead of hoping an LLM understands your instructions, you can guarantee specific behavior.

Built-in Agents

  • codebuff/base - Main coding assistant
  • codebuff/reviewer - Code review
  • codebuff/thinker - Deep thinking
  • codebuff/researcher - Research & docs
  • codebuff/planner - Planning & architecture
  • codebuff/file-picker - File discovery

Agent Workflow

A typical call to Codebuff may result in the following flow:


Example: Authentication System Refactoring

If you say "refactor this authentication system", Codebuff might break down the task into the following steps:

  1. File Picker finds auth-related files
  2. Research looks up best practices
  3. Planning creates step-by-step plan
  4. Base implements changes informed by the previous agents
  5. Reviewer checks for security issues

Agent Coordination

Agents coordinate through the spawnerPrompt field, which helps other agents understand when and why to spawn them. This creates intelligent workflows where:

  • Specialized agents are spawned for specific tasks
  • Each agent clearly describes its purpose and capabilities
  • The system automatically matches tasks to the right agents

Agents can spawn other agents listed in their spawnableAgents field, creating a hierarchy of specialized helpers.
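As a minimal sketch of how these two fields fit together (the agent ID and prompt below are illustrative, not built-ins):

```typescript
// Illustrative coordinator: spawnerPrompt tells *other* agents when to spawn
// this one; spawnableAgents lists the helpers this agent may spawn in turn.
const coordinator = {
  id: 'test-coordinator',
  displayName: 'Test Coordinator',
  spawnerPrompt:
    'Spawn this agent to plan and run the test suite after code changes',
  spawnableAgents: ['codebuff/file-picker@0.0.1', 'codebuff/reviewer@0.0.1'],
}

console.log(coordinator.spawnableAgents.length) // → 2
```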

Quick Start

  1. Customize existing agents - Modify prompts and tools
  2. Create new agents - Build specialized functionality
  3. Reference guide - Complete field documentation

Customizing Agents

Create specialized agents from scratch using TypeScript files in the .agents/ directory:

text
.agents/
├── my-custom-agent.ts
├── security-coordinator.ts
└── types/
    └── agent-definition.ts

Domain-Specific Customization

Agents adapt to your specific workflow and project needs.

Keep in mind you'll get the most value from agents if you treat them as context window managers. Design them to orchestrate and sequence work so the right files, facts, and decisions are in scope at each step. Break tasks into steps and build agents around controlling that flow instead of trying to replicate human specialties.

Tip: Use specialty reviewers as spawnable subagents that your context-manager agent calls at the right time in the workflow.

Reasoning Options (optional)

For models that support it, you can enable and tune reasoning directly on your agent:

ts
const definition = {
  id: 'reviewer',
  // ... other fields ...
  reasoningOptions: {
    enabled: true,      // turn on reasoning if supported
    exclude: false,     // include reasoning traces when available
    effort: 'medium',   // low | medium | high
  },
}
  • Higher effort corresponds to a larger internal "max tokens for reasoning" budget.
  • If the selected model doesn't support reasoning, these options are safely ignored.

Example: Security Coordinator Agent

Create a specialized agent for security-focused workflows:

.agents/security-coordinator.ts

typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "security-coordinator",
  version: "1.0.0",

  displayName: "Security Coordinator",
  spawnerPrompt: "Spawn this agent to coordinate security-focused development workflows and ensure secure coding practices",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,

  toolNames: ["read_files", "spawn_agents", "code_search", "end_turn"],
  spawnableAgents: ["codebuff/reviewer@0.0.1", "codebuff/researcher@0.0.1", "codebuff/file-picker@0.0.1"],

  inputSchema: {
    prompt: {
      type: "string",
      description: "Security analysis or coordination task"
    }
  },

  systemPrompt: "You are a security coordinator responsible for ensuring secure development practices.",
  instructionsPrompt: "Analyze the security implications of the request and coordinate appropriate security-focused agents.",
  stepPrompt: "Continue analyzing security requirements and coordinating the workflow. Use end_turn when complete."
}

export default definition

Advanced: Adding Programmatic Control

Make your agent even more powerful with programmatic control:

typescript
const definition: AgentDefinition = {
  id: "security-coordinator",
  // ... other fields ...

  handleSteps: function* ({ prompt, params }) {
    // 1. Scan for security vulnerabilities
    const { toolResult: scanResults } = yield {
      toolName: 'code_search',
      input: {
        pattern: '(eval|exec|dangerouslySetInnerHTML|process\\.env)',
        flags: '-i'
      }
    }

    // 2. If vulnerabilities found, spawn security reviewer
    if (scanResults) {
      yield {
        toolName: 'spawn_agents',
        input: {
          agents: [{
            agent_type: 'security-reviewer',
            prompt: `Review these potential security issues: ${scanResults}`
          }]
        }
      }
    }

    // 3. Let the agent handle the rest
    yield 'STEP_ALL'
  }
}

With handleSteps, your agent can:

  • Make decisions based on actual code analysis
  • Orchestrate multiple tools in sequence
  • Handle errors gracefully
  • Implement complex conditional logic

Available Fields

Core:

  • id
  • displayName
  • model
  • version
  • publisher

Tools:

  • toolNames
  • spawnableAgents

Prompts:

  • spawnerPrompt
  • systemPrompt
  • instructionsPrompt
  • stepPrompt

Input/Output:

  • inputSchema
  • outputMode
  • outputSchema
  • includeMessageHistory

Programmatic:

  • handleSteps

Troubleshooting

Agent not loading: Check TypeScript syntax; the file must export a default AgentDefinition

Type errors: Import types from './types/agent-definition'

Prompts not applying: Verify file paths are relative to .agents/ directory

Running specific agents:

  1. Check TypeScript: bun run typecheck in .agents/ directory
  2. Restart Codebuff to see errors
  3. Test with --agent <agent-id> to debug specific agents

Creating New Agents

Create specialized agents from scratch using TypeScript files in the .agents/ directory.

Types:

  • LLM-based - Use prompts and language models
  • Programmatic - Use TypeScript generator functions with handleSteps

Control Flow:

  • yield 'STEP' - Run one LLM generation step
  • yield 'STEP_ALL' - Run until completion
  • return - End the agent's turn

Accessing Context:

  • agentState - Current agent state and message history
  • prompt - User's prompt to the agent
  • params - Additional parameters passed to the agent

Basic Structure

Create a new TypeScript file in .agents/ directory:

.agents/my-custom-agent.ts

typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "my-custom-agent",
  version: "1.0.0",


  displayName: "My Custom Agent",
  spawnerPrompt: "Spawn this agent for specialized workflow tasks requiring custom logic",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,
  toolNames: ["read_files", "write_file", "end_turn"],
  spawnableAgents: ["codebuff/researcher@0.0.1"],  // Use full name for built-in agents

  inputSchema: {
    prompt: {
      type: "string",
      description: "What documentation to create or update"
    }
  },

  systemPrompt: `You are a documentation specialist.`,
  instructionsPrompt: "Create comprehensive documentation based on the user's request. Research existing code and patterns first.",
  stepPrompt: "Continue working on the documentation. Use end_turn when complete."
}

export default definition

Domain-Specific Examples

API Documentation Agent

Specialized for documenting REST APIs and GraphQL schemas:

.agents/api-documenter.ts

typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "api-documenter",
  version: "1.0.0",


  displayName: "API Documentation Specialist",
  spawnerPrompt: "Spawn this agent to create comprehensive API documentation with examples, schemas, and error codes",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,

  toolNames: ["read_files", "code_search", "write_file", "spawn_agents", "end_turn"],
  spawnableAgents: ["codebuff/researcher@0.0.1"],  // Use full name for built-in agents

  inputSchema: {
    prompt: {
      type: "string",
      description: "What API endpoints or schemas to document"
    }
  },

  systemPrompt: "You are an API documentation specialist. Create clear, comprehensive documentation for REST APIs and GraphQL schemas with examples, request/response formats, and error codes.",
  instructionsPrompt: "Analyze the specified API endpoints and create detailed documentation including examples, parameters, and response schemas.",
  stepPrompt: "Continue documenting the API. Include practical examples and edge cases. Use end_turn when complete."
}

export default definition

Database Migration Agent

Specialized for creating and reviewing database migrations:

.agents/migration-specialist.ts

typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "migration-specialist",
  version: "1.0.0",


  displayName: "Database Migration Specialist",
  spawnerPrompt: "Spawn this agent to create safe, reversible database migrations with proper indexing and rollback procedures",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,

  toolNames: [
    "read_files",
    "write_file",
    "code_search",
    "run_terminal_command",
    "end_turn"
  ],
  spawnableAgents: ["codebuff/reviewer@0.0.1"],

  systemPrompt: "You are a database migration specialist. Your goal is to create safe, reversible database migrations with proper indexing and rollback procedures.",
  instructionsPrompt: "Create a database migration for the requested schema changes. Ensure it's reversible and includes proper indexing.",
  stepPrompt: "Continue working on the migration. Test it if possible and spawn a reviewer to check for issues."
}

export default definition

Programmatic Agents (Advanced)

🎯 This is where Codebuff agents become truly powerful! While LLM-based agents work well for many tasks, programmatic agents give you precise control over complex workflows, while still letting you tap into LLMs when you want them.

Why Use Programmatic Agents?

  • Deterministic workflows - Guarantee specific steps happen in order
  • Dynamic decision making - Branch based on your own logic
  • Complex orchestration - Coordinate multiple agents and tools with logic
  • State management - Maintain state across multiple agent steps

How It Works

Use TypeScript generator functions with the handleSteps field to control execution:

.agents/code-analyzer.ts

typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "code-analyzer",
  displayName: "Code Analysis Expert",
  spawnerPrompt: "Spawn for deep code analysis and refactoring suggestions",
  model: "anthropic/claude-4-sonnet-20250522",

  toolNames: ["read_files", "code_search", "spawn_agents", "write_file"],
  spawnableAgents: ["codebuff/thinker@0.0.1", "codebuff/reviewer@0.0.1"],

  handleSteps: function* ({ agentState, prompt, params }) {
    // First, find relevant files
    const { toolResult: files } = yield {
      toolName: 'find_files',
      input: { query: prompt }
    }

    // Read the most important files
    if (files) {
      const filePaths = JSON.parse(files).slice(0, 5)
      yield {
        toolName: 'read_files',
        input: { paths: filePaths }
      }
    }

    // Spawn a thinker for deep analysis
    yield {
      toolName: 'spawn_agents',
      input: {
        agents: [{
          agent_type: 'thinker',
          prompt: `Analyze the code structure and suggest improvements for: ${prompt}`
        }]
      }
    }

    // Let the agent generate its response
    yield 'STEP_ALL'
  }
}

export default definition

Key Concepts for Programmatic Agents

1. Generator Function Basics

Your handleSteps function receives context and yields actions:

typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // agentState: Current conversation and agent state
  // prompt: What the user asked this agent to do
  // params: Additional parameters passed to the agent

  // Your logic here...
}

2. Yielding Tool Calls

Execute tools and get their results:

typescript
const { toolResult, toolError } = yield {
  toolName: 'read_files',
  input: { paths: ['file1.ts', 'file2.ts'] }
}

if (toolError) {
  // Handle error case
  console.error('Failed to read files:', toolError)
} else {
  // Use the result
  const fileContent = JSON.parse(toolResult)
}

3. Control Flow Options

Control Flow:

  • yield 'STEP' - Run one LLM generation step
  • yield 'STEP_ALL' - Run until completion
  • return - End the agent's turn

4. Advanced Example: Conditional Workflow

typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // Step 1: Analyze the codebase
  const { toolResult: analysis } = yield {
    toolName: 'spawn_agents',
    input: {
      agents: [{
        agent_type: 'thinker',
        prompt: `Analyze: ${prompt}`
      }]
    }
  }

  // Step 2: Based on analysis, choose action
  if (analysis?.includes('refactor')) {
    // Get all files that need refactoring
    const { toolResult: files } = yield {
      toolName: 'find_files',
      input: { query: 'needs refactoring' }
    }

    // Step 3: Refactor each file
    for (const file of JSON.parse(files || '[]')) {
      yield {
        toolName: 'write_file',
        input: {
          path: file,
          instructions: 'Refactor for better performance',
          content: '// ... refactored code ...'
        }
      }
    }
  }

  // Step 4: Final review
  yield {
    toolName: 'spawn_agents',
    input: {
      agents: [{
        agent_type: 'reviewer',
        prompt: 'Review all changes'
      }]
    }
  }

  // Let the agent summarize
  yield 'STEP_ALL'
}

When to Choose Programmatic vs LLM-based

Use Programmatic (handleSteps) when:

  • You need guaranteed execution order
  • Decisions depend on specific file contents
  • Complex multi-step workflows with branching
  • Integration with external systems
  • Error recovery is critical

Use LLM-based (prompts only) when:

  • Task is straightforward
  • Agent needs creative freedom
  • Natural language understanding is key
  • Workflow is simple and linear

Agent Reference

Complete reference for all agent configuration fields and tools.

Key Terms

Agent Template: JSON file defining agent behavior

Spawnable Agents: Sub-agents this agent can create

Tool Names: Capabilities (read files, run commands, etc.)

Output Mode: Response format (last message, report, all messages)

Prompt Schema: Input validation rules

Agent Configuration

When creating agent templates, you define all aspects of the agent from scratch.

Agent Schema

json
{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "pattern": "^[a-z0-9-]+$"
    },
    "version": {
      "type": "string"
    },
    "publisher": {
      "type": "string"
    },
    "displayName": {
      "type": "string"
    },
    "model": {
      "type": "string"
    },
    "reasoningOptions": {
      "allOf": [
        {
          "type": "object",
          "properties": {
            "enabled": {
  // ... truncated

Core Configuration

id (string, required)

Unique identifier for this agent. Must contain only lowercase letters, numbers, and hyphens.

json
"id": "code-reviewer"

displayName (string, required)

Human-readable name for the agent.

json
"displayName": "Code Review Specialist"

spawnerPrompt (string, optional)

Prompt for when and why to spawn this agent. Include the main purpose and use cases. This field is key if the agent is intended to be spawned by other agents.

json
"spawnerPrompt": "Spawn this agent for thorough code review, focusing on bugs, security issues, and best practices"

Model Configuration

model (string, required)

The model to use, which can be any model string from OpenRouter.

json
"model": "anthropic/claude-4-sonnet-20250522"

Reasoning Options (reasoningOptions, object, optional)

Controls model reasoning behavior using OpenRouter-style settings.

Fields:

  • enabled (boolean, default: false) — Turn reasoning mode on for supported models.
  • exclude (boolean, default: false) — If true, omit model-revealed reasoning content from responses (when available), returning only the final answer.
  • effort ("low" | "medium" | "high") — Increase or decrease how much the model "thinks" before answering. Higher effort typically improves quality for hard tasks at the cost of more reasoning tokens.

Notes:

  • Patterned after OpenRouter's "max tokens for reasoning." Higher effort maps to a higher internal reasoning token budget. See OpenRouter docs for background.
  • Only supported by models that expose reasoning features. When unsupported, these options are ignored.

Example:

ts
// .agents/thinker.ts
const definition = {
  id: 'thinker',
  // ... other fields ...
  reasoningOptions: {
    enabled: true,
    exclude: false,
    effort: 'high',
  },
}

Behavior Configuration

outputMode (string, optional, default: "last_message")

How the agent's output is handled.

Options:

  • "last_message" - Return only the final message (default)
  • "all_messages" - Return all messages from the conversation
  • "structured_output" - Return a structured JSON object (use with outputSchema)
json
"outputMode": "last_message"

includeMessageHistory (boolean, optional, default: false)

Whether to include conversation history from the parent agent when spawning this agent.

json
"includeMessageHistory": true

outputSchema (object, optional)

JSON Schema for structured output (when outputMode is "structured_output"). Defines the expected shape of the JSON object the agent will return.

json
"outputMode": "structured_output",
"outputSchema": {
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "issues": {
      "type": "array",
      "items": { "type": "string" }
    },
    "score": { "type": "number" }
  },
  "required": ["summary", "issues"]
}
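Sketched as part of a TypeScript agent definition, structured output pairs outputMode and outputSchema with the set_output tool (which sets the output JSON object). The field values below are illustrative, and the schema simply mirrors the JSON above:

```typescript
// Illustrative fragment: an agent that returns structured review results.
const reviewer = {
  id: 'structured-reviewer',
  displayName: 'Structured Reviewer',
  outputMode: 'structured_output',
  // set_output lets the agent emit the JSON object described by outputSchema
  toolNames: ['read_files', 'set_output', 'end_turn'],
  outputSchema: {
    type: 'object',
    properties: {
      summary: { type: 'string' },
      issues: { type: 'array', items: { type: 'string' } },
      score: { type: 'number' },
    },
    required: ['summary', 'issues'],
  },
}

console.log(reviewer.outputSchema.required) // → [ 'summary', 'issues' ]
```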

Tools and Capabilities

toolNames (array, optional, default: ["end_turn"])

List of tools the agent can use.

Available Tools:

  • add_subgoal - Create subgoals for tracking progress
  • browser_logs - Navigate web pages and get console logs
  • code_search - Search for patterns in code files
  • create_plan - Generate detailed plans for complex tasks
  • end_turn - End the agent's turn
  • find_files - Find relevant files in the codebase
  • read_docs - Read documentation for libraries
  • read_files - Read file contents
  • run_file_change_hooks - Run configured file change hooks
  • run_terminal_command - Execute terminal commands
  • spawn_agents - Spawn other agents
  • str_replace - Replace strings in files
  • think_deeply - Perform deep analysis
  • update_subgoal - Update existing subgoals
  • web_search - Search the web
  • write_file - Create or edit files
  • set_output - Set an output JSON object
json
"toolNames": ["read_files", "write_file", "code_search", "end_turn"]

spawnableAgents (array, optional, default: [])

Other agents this agent can spawn. Use the fully qualified agent ID from the agent store (publisher/name@version) or the agent ID from a local agent file.

⚠️ Important: When referencing built-in agents, you must specify both publisher and version (e.g., codebuff/reviewer@0.0.1). Omit publisher/version only for local agents defined in your .agents/ directory.

Referencing Agents:

  • Published/built-in agents: "codebuff/file-picker@0.0.1" (publisher and version required!)
  • Local agents: "my-custom-agent" (just the agent ID)

Available Built-in Agents:

  • codebuff/base - Main coding assistant
  • codebuff/reviewer - Code review agent
  • codebuff/thinker - Deep thinking agent
  • codebuff/researcher - Research and documentation agent
  • codebuff/planner - Planning and architecture agent
  • codebuff/file-picker - File discovery agent
json
"spawnableAgents": ["codebuff/researcher@0.0.1", "my-local-agent"]

Prompt Configuration

All prompt fields support two formats:

  1. Direct string content:
json
"systemPrompt": "You are a helpful assistant..."
  2. External file reference:
json
"systemPrompt": {
  "path": "./my-system-prompt.md"
}

Prompt Fields (all optional)

systemPrompt (string or object, optional)

Background information for the agent. Prefer instructionsPrompt for actionable agent instructions.

instructionsPrompt (string or object, optional)

Instructions for the agent. This is the best way to shape the agent's behavior and is inserted after each user input.

stepPrompt (string or object, optional)

Prompt inserted at each agent step. Powerful for changing behavior but usually not necessary for smart models.

Programmatic Control

handleSteps (generator function, optional)

🚀 This is what makes Codebuff agents truly powerful! Unlike traditional prompt-based agents, handleSteps lets you write actual code to control agent behavior.

Programmatically control the agent's execution using a TypeScript generator function. This enables:

  • Dynamic decision making based on tool results
  • Complex orchestration between multiple tools and agents
  • Conditional branching based on file contents or agent responses
  • Iterative refinement until desired results are achieved
  • State management across multiple steps

What You Can Yield:

  • Tool call object - Execute a specific tool and get its result
  • 'STEP' - Run the agent's LLM for one generation step
  • 'STEP_ALL' - Let the agent run until completion
  • return - End the agent's turn immediately

Tool Call Pattern:

typescript
const { toolResult, toolError } = yield {
  toolName: 'read_files',
  input: { paths: ['file.ts'] }
}
// Now you can use toolResult to make decisions!

Example:

typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // First, read some files
  const { toolResult } = yield {
    toolName: 'read_files',
    input: { paths: ['src/index.ts', 'src/config.ts'] }
  }
  // Then spawn a thinker agent
  yield {
    toolName: 'spawn_agents',
    input: {
      agents: [{
        agent_type: 'thinker',
        prompt: 'Analyze this code structure'
      }]
    }
  }

  // Let the agent take over from here
  yield 'STEP_ALL'
}

Schema Validation

inputSchema (object, optional)

JSON Schema definitions for validating prompt and params when spawning the agent.

json
"inputSchema": {
  "prompt": {
    "type": "string",
    "description": "What documentation to create"
  },
  "params": {
    "type": "object",
    "properties": {
      "format": {
        "type": "string",
        "enum": ["markdown", "html"]
      }
    }
  }
}

Real-World Example:

typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // 1. Dynamically find relevant files
  const { toolResult: searchResults } = yield {
    toolName: 'code_search',
    input: { pattern: params.searchPattern || 'TODO' }
  }

  // 2. Parse results and decide what to read
  const files = JSON.parse(searchResults || '[]')
  if (files.length > 0) {
    const { toolResult: fileContents } = yield {
      toolName: 'read_files',
      input: { paths: files.slice(0, 10) }
    }

    // 3. Conditionally spawn different agents based on content
    if (fileContents?.includes('security')) {
      yield {
        toolName: 'spawn_agents',
        input: {
          agents: [{
            agent_type: 'security-reviewer',
            prompt: `Review security implications in: ${files.join(', ')}`
          }]
        }
      }
    }
  }

  // 4. Let the LLM handle the rest with context
  yield 'STEP_ALL'
}

Why This Matters:

  • Traditional agents rely solely on prompts and hope the LLM makes the right decisions
  • With handleSteps, you have deterministic control over the agent's workflow
  • You can implement complex logic that would be impossible with prompts alone
  • Results from one tool directly inform the next action programmatically

Agent Example

.agents/documentation-writer.ts

typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "documentation-writer",
  version: "1.0.0",
  publisher: "mycompany",

  displayName: "Documentation Writer",
  spawnerPrompt: "Spawn this agent for creating comprehensive documentation, API docs, or user guides",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,

  toolNames: [
    "read_files",
    "write_file",
    "code_search",
    "spawn_agents",
    "end_turn"
  ],
  spawnableAgents: ["codebuff/researcher@0.0.1"],

  inputSchema: {
    prompt: {
      type: "string",
      description: "What documentation to create or update"
    }
  },

  systemPrompt: {
    path: "./prompts/doc-writer-system.md"
  },
  instructionsPrompt: "Create comprehensive documentation based on the user's request. Research existing code first.",
  stepPrompt: "Continue working on the documentation. Use end_turn when complete."
}

export default definition

Troubleshooting Agent Customization

Quick fixes for common agent customization issues.

Quick Fix Checklist

  1. Restart Codebuff to reload templates
  2. Check JSON syntax: cat your-agent-file.json | jq
  3. Verify file paths are relative to project root
  4. Ensure "override": true is set for overrides

Common Errors

"Agent not found"

text
Error: Agent 'my-custom-agent' not found

Fix: Check agent ID spelling, file location (.agents/templates/), JSON syntax (cat file.json | jq)

"Invalid spawnable agent"

text
Validation error: spawnableAgents contains invalid agent 'researcher-typo'

Fix: Check spelling against built-in agents list, use exact IDs

"Path not found" Error

text
Error: Cannot resolve prompt file './my-prompt.md'

Causes:

  • File doesn't exist at specified path
  • Incorrect relative path resolution
  • File permissions issue

Solutions:

  1. Use paths relative to project root: .agents/templates/my-prompt.md
  2. Verify file exists: ls -la .agents/templates/my-prompt.md
  3. Check file permissions are readable

JSON Schema Issues

Invalid Override Type

json
{
  "systemPrompt": {
    "type": "add", // ❌ Invalid
    "content": "..."
  }
}

Fix: Use valid override types:

json
{
  "systemPrompt": {
    "type": "append", // ✅ Valid: append, prepend, replace
    "content": "..."
  }
}

Missing Required Fields

json
{
  "id": "my-agent",
  "override": false,
  "displayName": "My Agent"
  // ❌ Missing required fields for new agents
}

Fix: Include all required fields for new agents:

json
{
  "id": "my-agent",
  "version": "1.0.0",
  "override": false,
  "displayName": "My Agent",
  "purpose": "Brief description of the agent's purpose",
  "model": "anthropic/claude-4-sonnet-20250522",
  "systemPrompt": "You are a helpful assistant...",
  "instructionsPrompt": "Process the user's request...",
  "stepPrompt": "Continue working on the task..."
}


Agent Behavior Issues

Agent Not Using Custom Prompts

Symptoms:

  • Agent behaves like default version
  • Custom instructions ignored

Debug Steps:

  1. Check override is properly applied:
bash
# Restart Codebuff to reload templates
codebuff
  2. Verify override syntax:
json
{
  "id": "CodebuffAI/reviewer", // ✅ Exact match required
  "override": true, // ✅ Must be true for overrides
  "systemPrompt": {
    "type": "append", // ✅ Valid override type
    "content": "Custom instructions..."
  }
}

Agent Spawning Wrong Sub-agents

Symptoms:

  • Unexpected agents being created
  • Missing expected specialized agents

Solutions:

  1. Check spawnableAgents configuration:
json
{
  "spawnableAgents": {
    "type": "replace", // Use "replace" to override completely
    "content": ["researcher", "thinker"]
  }
}
  2. Verify agent names are correct (no typos)

Performance Issues

Agent Taking Too Long

Causes:

  • Complex prompts causing slow generation
  • Too many tools enabled
  • Large context from message history

Solutions:

  1. Simplify prompts and remove unnecessary instructions
  2. Limit toolNames to only required tools
  3. Set includeMessageHistory: false for stateless agents
  4. Use faster models for simple tasks:
json
{
  "model": "anthropic/claude-3-5-haiku-20241022" // Faster model
}

High Credit Usage

Causes:

  • Using expensive models unnecessarily
  • Agents spawning too many sub-agents
  • Large context windows

Solutions:

  1. Use cost-effective models:
json
{
  "model": "google/gemini-2.5-flash" // More economical
}
  2. Limit spawnable agents:
json
{
  "spawnableAgents": [] // Prevent sub-agent spawning
}

File Organization Issues

Templates Not Loading

Symptoms:

  • No custom agents available
  • Validation errors on startup

Debug Steps:

  1. Check directory structure:
text
your-project/
├── .agents/
│   ├── my-agent.json
│   └── my-prompts.md
  2. Verify file permissions:
bash
ls -la .agents/templates/
  3. Check for hidden characters or encoding issues:
bash
file .agents/templates/*.json

Best Practices for Debugging

1. Start Simple

Begin with minimal overrides and add complexity gradually:

json
{
  "id": "CodebuffAI/reviewer",
  "override": true,
  "model": "anthropic/claude-4-sonnet-20250522"
}

2. Use Validation Tools

  • JSON validator: cat file.json | jq
  • File existence: ls -la .agents/templates/
  • Syntax check: Most editors highlight JSON errors

3. Check Logs

Restart Codebuff to see validation errors:

bash
codebuff  # Look for error messages on startup

4. Test Incrementally

Add one override at a time to isolate issues:

  1. Test basic override (model change)
  2. Add simple prompt override
  3. Add external file reference
  4. Add tool modifications

5. Use Version Control

Track your agent templates in git to easily revert problematic changes:

bash
git add .agents/
git commit -m "Add custom reviewer agent"

Getting Help

If you're still experiencing issues:

  1. Check the logs: Look for specific error messages when starting Codebuff
  2. Simplify: Remove customizations until it works, then add back gradually
  3. Community: Join our Discord for real-time help
  4. Documentation: Review the Agent Reference for complete field descriptions

Quick Reference

Valid Override Types

  • "append" - Add to existing content
  • "prepend" - Add before existing content
  • "replace" - Replace entire content

Required Fields for New Agents

  • id, version, override: false
  • displayName, purpose, model
  • systemPrompt, instructionsPrompt, stepPrompt

Common File Paths

  • Agent templates: .agents/templates/*.json
  • External prompts: .agents/templates/*.md
  • Project root: ./ (for absolute paths)

Agents