Codebuff uses specialized agents that collaborate instead of a single agent doing everything. Agents spawn other agents, share tools, and pass context between tasks.

Codebuff agents can also be programmatically controlled using TypeScript generator functions: you can write actual code to orchestrate complex workflows, make decisions based on file contents, and add determinism as you see fit. Instead of hoping an LLM understands your instructions, you can guarantee specific behavior. Here are some of the sub-agents Codebuff uses:
## Built-in Agents

- `codebuff/base` - Main coding assistant
- `codebuff/reviewer` - Code review
- `codebuff/thinker` - Deep thinking
- `codebuff/researcher` - Research & docs
- `codebuff/planner` - Planning & architecture
- `codebuff/file-picker` - File discovery
## Agent Workflow

A typical call to Codebuff may result in the following flow:
*(Mermaid diagram of the agent workflow)*
### Example: Authentication System Refactoring

If you say "refactor this authentication system", Codebuff might break the task down into the following steps:

1. **File Picker** finds auth-related files
2. **Researcher** looks up best practices
3. **Planner** creates a step-by-step plan
4. **Base** implements changes, informed by the previous agents
5. **Reviewer** checks for security issues
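The flow above can be sketched as a generator that yields one action per step. This is an illustrative sketch only, not Codebuff's actual runtime: the agent IDs mirror the built-in list above, and the `spawn_agents` payload shape follows the `handleSteps` examples later in this document. Here we simply walk the generator by hand to show the ordering.

```typescript
// Illustrative sketch: a generator that yields the workflow steps above.
type Step =
  | { toolName: string; input: { agents: { agent_type: string; prompt: string }[] } }
  | 'STEP_ALL'

function* refactorWorkflow(prompt: string): Generator<Step, void, unknown> {
  yield { toolName: 'spawn_agents', input: { agents: [{ agent_type: 'file-picker', prompt }] } } // 1. find files
  yield { toolName: 'spawn_agents', input: { agents: [{ agent_type: 'researcher', prompt }] } }  // 2. research
  yield { toolName: 'spawn_agents', input: { agents: [{ agent_type: 'planner', prompt }] } }     // 3. plan
  yield 'STEP_ALL' // 4. let the base agent implement the changes
  yield { toolName: 'spawn_agents', input: { agents: [{ agent_type: 'reviewer', prompt: 'Check for security issues' }] } } // 5. review
}

// Walk the generator to inspect the order of actions
const steps = [...refactorWorkflow('refactor this authentication system')]
console.log(steps.length) // 5
```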
## Agent Coordination

Agents coordinate through the `spawnerPrompt` field, which helps other agents understand when and why to spawn them. This creates intelligent workflows where:

- Specialized agents are spawned for specific tasks
- Each agent clearly describes its purpose and capabilities
- The system automatically matches tasks to the right agents

Agents can spawn other agents listed in their `spawnableAgents` field, creating a hierarchy of specialized helpers. Agents adapt to your specific workflow and project needs.
Keep in mind you'll get the most value from agents if you treat them as context window managers. Design them to orchestrate and sequence work so the right files, facts, and decisions are in scope at each step. Break tasks into steps and build agents around controlling that flow instead of trying to replicate human specialties.
**Tip:** Use specialty reviewers as spawnable subagents that your context-manager agent calls at the right time in the workflow.
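As a sketch of how these two fields pair up, consider a coordinator that lists a reviewer among its spawnable agents. This uses a trimmed-down local type rather than the real `AgentDefinition` import, so it stands alone; the field names match the reference section below.

```typescript
// Minimal stand-in for AgentDefinition, reduced to the coordination fields.
interface MiniAgent {
  id: string
  spawnerPrompt: string     // tells other agents when and why to spawn this one
  spawnableAgents: string[] // which agents this one may spawn in turn
}

const securityReviewer: MiniAgent = {
  id: 'security-reviewer',
  spawnerPrompt: 'Spawn this agent to audit code changes for security issues',
  spawnableAgents: [],
}

const coordinator: MiniAgent = {
  id: 'release-coordinator',
  spawnerPrompt: 'Spawn this agent to sequence review and release checks',
  // The coordinator can call the reviewer at the right point in its workflow
  spawnableAgents: [securityReviewer.id],
}

console.log(coordinator.spawnableAgents.includes('security-reviewer')) // true
```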
## Reasoning Options (optional)

For models that support it, you can enable and tune reasoning directly on your agent:

```ts
const definition = {
  id: 'reviewer',
  // ... other fields ...
  reasoningOptions: {
    enabled: true,    // turn on reasoning if supported
    exclude: false,   // include reasoning traces when available
    effort: 'medium', // low | medium | high
  },
}
```

Effort roughly corresponds to a larger internal "max tokens for reasoning." If the selected model doesn't support reasoning, these options are safely ignored.
## Example: Security Coordinator Agent

Create a specialized agent for security-focused workflows:

**.agents/security-coordinator.ts**

```typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "security-coordinator",
  version: "1.0.0",
  displayName: "Security Coordinator",
  spawnerPrompt: "Spawn this agent to coordinate security-focused development workflows and ensure secure coding practices",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,
  toolNames: ["read_files", "spawn_agents", "code_search", "end_turn"],
  spawnableAgents: ["codebuff/reviewer@0.0.1", "codebuff/researcher@0.0.1", "codebuff/file-picker@0.0.1"],
  inputSchema: {
    prompt: {
      type: "string",
      description: "Security analysis or coordination task"
    }
  },
  systemPrompt: "You are a security coordinator responsible for ensuring secure development practices.",
  instructionsPrompt: "Analyze the security implications of the request and coordinate appropriate security-focused agents.",
  stepPrompt: "Continue analyzing security requirements and coordinating the workflow. Use end_turn when complete."
}

export default definition
```
## Advanced: Adding Programmatic Control

Make your agent even more powerful with programmatic control (covered in detail under Programmatic Agents below). Common issues to watch for when customizing agents:

- **Agent not loading:** Check the TypeScript syntax; the file must export a default `AgentDefinition`
- **Type errors:** Import types from `./types/agent-definition`
- **Prompts not applying:** Verify file paths are relative to the `.agents/` directory

Running specific agents:

- Check TypeScript: run `bun run typecheck` in the `.agents/` directory
- Restart Codebuff to see errors
- Test with `--agent <agent-id>` to debug specific agents
## Creating New Agents

Create specialized agents from scratch using TypeScript files in the `.agents/` directory.

Types:

- **LLM-based** - Use prompts and language models
- **Programmatic** - Use TypeScript generator functions with `handleSteps`

Control flow:

- `yield 'STEP'` - Run one LLM generation step
- `yield 'STEP_ALL'` - Run until completion
- `return` - End the agent's turn

Accessing context:

- `agentState` - Current agent state and message history
- `prompt` - User's prompt to the agent
- `params` - Additional parameters passed to the agent
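A minimal sketch of how a runtime consumes these yields may make the control flow concrete. Here we drive a generator by hand; in Codebuff the framework does this for you, and the exact resume-value shape is an assumption for illustration.

```typescript
type ToolCall = { toolName: string; input: Record<string, unknown> }

// A tiny handleSteps-style generator: one tool call, one step, then run to completion.
function* demoSteps(): Generator<ToolCall | 'STEP' | 'STEP_ALL', void, { toolResult?: string }> {
  // Yield a tool call; the runtime resumes the generator with the result
  const { toolResult } = yield { toolName: 'read_files', input: { paths: ['README.md'] } }
  if (!toolResult) return // end the turn early if the read failed
  yield 'STEP'     // run one LLM generation step
  yield 'STEP_ALL' // then let the agent run until completion
}

// Hand-drive the generator, simulating the runtime
const gen = demoSteps()
const first = gen.next()                       // the read_files tool call
const second = gen.next({ toolResult: '...' }) // resume with the tool result
const third = gen.next({})
```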
### Basic Structure

Create a new TypeScript file in the `.agents/` directory:

**.agents/my-custom-agent.ts**

```typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "my-custom-agent",
  version: "1.0.0",
  displayName: "My Custom Agent",
  spawnerPrompt: "Spawn this agent for specialized workflow tasks requiring custom logic",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,
  toolNames: ["read_files", "write_file", "end_turn"],
  spawnableAgents: ["codebuff/researcher@0.0.1"], // Use full name for built-in agents
  inputSchema: {
    prompt: {
      type: "string",
      description: "What documentation to create or update"
    }
  },
  systemPrompt: `You are a documentation specialist.`,
  instructionsPrompt: "Create comprehensive documentation based on the user's request. Research existing code and patterns first.",
  stepPrompt: "Continue working on the documentation. Use end_turn when complete."
}

export default definition
```
### Domain-Specific Examples

#### API Documentation Agent

Specialized for documenting REST APIs and GraphQL schemas:

**.agents/api-documenter.ts**

```typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "api-documenter",
  version: "1.0.0",
  displayName: "API Documentation Specialist",
  spawnerPrompt: "Spawn this agent to create comprehensive API documentation with examples, schemas, and error codes",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,
  toolNames: ["read_files", "code_search", "write_file", "spawn_agents", "end_turn"],
  spawnableAgents: ["codebuff/researcher@0.0.1"], // Use full name for built-in agents
  inputSchema: {
    prompt: {
      type: "string",
      description: "What API endpoints or schemas to document"
    }
  },
  systemPrompt: "You are an API documentation specialist. Create clear, comprehensive documentation for REST APIs and GraphQL schemas with examples, request/response formats, and error codes.",
  instructionsPrompt: "Analyze the specified API endpoints and create detailed documentation including examples, parameters, and response schemas.",
  stepPrompt: "Continue documenting the API. Include practical examples and edge cases. Use end_turn when complete."
}

export default definition
```
#### Database Migration Agent

Specialized for creating and reviewing database migrations:

**.agents/migration-specialist.ts**

```typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "migration-specialist",
  version: "1.0.0",
  displayName: "Database Migration Specialist",
  spawnerPrompt: "Spawn this agent to create safe, reversible database migrations with proper indexing and rollback procedures",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,
  toolNames: [
    "read_files",
    "write_file",
    "code_search",
    "run_terminal_command",
    "end_turn"
  ],
  spawnableAgents: ["codebuff/reviewer@0.0.1"],
  systemPrompt: "You are a database migration specialist. Your goal is to create safe, reversible database migrations with proper indexing and rollback procedures.",
  instructionsPrompt: "Create a database migration for the requested schema changes. Ensure it's reversible and includes proper indexing.",
  stepPrompt: "Continue working on the migration. Test it if possible and spawn a reviewer to check for issues."
}

export default definition
```
## Programmatic Agents (Advanced)

🎯 This is where Codebuff agents become truly powerful! While LLM-based agents work well for many tasks, programmatic agents give you precise control over complex workflows, while still letting you tap into LLMs when you want them.

### Why Use Programmatic Agents?

- **Deterministic workflows** - Guarantee specific steps happen in order
- **Dynamic decision making** - Branch based on your own logic
- **Complex orchestration** - Coordinate multiple agents and tools with logic
- **State management** - Maintain state across multiple agent steps

### How It Works

Use TypeScript generator functions with the `handleSteps` field to control execution:
**.agents/code-analyzer.ts**

```typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "code-analyzer",
  displayName: "Code Analysis Expert",
  spawnerPrompt: "Spawn for deep code analysis and refactoring suggestions",
  model: "anthropic/claude-4-sonnet-20250522",
  toolNames: ["read_files", "code_search", "spawn_agents", "write_file"],
  spawnableAgents: ["codebuff/thinker@0.0.1", "codebuff/reviewer@0.0.1"],

  handleSteps: function* ({ agentState, prompt, params }) {
    // First, find relevant files
    const { toolResult: files } = yield {
      toolName: 'find_files',
      input: { query: prompt }
    }

    // Read the most important files
    if (files) {
      const filePaths = JSON.parse(files).slice(0, 5)
      yield {
        toolName: 'read_files',
        input: { paths: filePaths }
      }
    }

    // Spawn a thinker for deep analysis
    yield {
      toolName: 'spawn_agents',
      input: {
        agents: [{
          agent_type: 'thinker',
          prompt: `Analyze the code structure and suggest improvements for: ${prompt}`
        }]
      }
    }

    // Let the agent generate its response
    yield 'STEP_ALL'
  }
}

export default definition
```
### Key Concepts for Programmatic Agents

#### 1. Generator Function Basics

Your `handleSteps` function receives context and yields actions:

```typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // agentState: Current conversation and agent state
  // prompt: What the user asked this agent to do
  // params: Additional parameters passed to the agent

  // Your logic here...
}
```
#### 2. Yielding Tool Calls

Execute tools and get their results:

```typescript
const { toolResult, toolError } = yield {
  toolName: 'read_files',
  input: { paths: ['file1.ts', 'file2.ts'] }
}

if (toolError) {
  // Handle error case
  console.error('Failed to read files:', toolError)
} else {
  // Use the result
  const fileContent = JSON.parse(toolResult)
}
```
#### 3. Control Flow Options

- `yield 'STEP'` - Run one LLM generation step
- `yield 'STEP_ALL'` - Run until completion
- `return` - End the agent's turn
#### 4. Advanced Example: Conditional Workflow

```typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // Step 1: Analyze the codebase
  const { toolResult: analysis } = yield {
    toolName: 'spawn_agents',
    input: {
      agents: [{
        agent_type: 'thinker',
        prompt: `Analyze: ${prompt}`
      }]
    }
  }

  // Step 2: Based on the analysis, choose an action
  if (analysis?.includes('refactor')) {
    // Get all files that need refactoring
    const { toolResult: files } = yield {
      toolName: 'find_files',
      input: { query: 'needs refactoring' }
    }

    // Step 3: Refactor each file
    for (const file of JSON.parse(files || '[]')) {
      yield {
        toolName: 'write_file',
        input: {
          path: file,
          instructions: 'Refactor for better performance',
          content: '// ... refactored code ...'
        }
      }
    }
  }

  // Step 4: Final review
  yield {
    toolName: 'spawn_agents',
    input: {
      agents: [{
        agent_type: 'reviewer',
        prompt: 'Review all changes'
      }]
    }
  }

  // Let the agent summarize
  yield 'STEP_ALL'
}
```
### When to Choose Programmatic vs LLM-based

Use programmatic (`handleSteps`) when:

- You need guaranteed execution order
- Decisions depend on specific file contents
- Complex multi-step workflows with branching
- Integration with external systems
- Error recovery is critical

Use LLM-based (prompts only) when:

- The task is straightforward
- The agent needs creative freedom
- Natural language understanding is key
- The workflow is simple and linear
## Agent Reference

Complete reference for all agent configuration fields and tools.

### Key Terms

- **Agent Template**: JSON file defining agent behavior
- **Spawnable Agents**: Sub-agents this agent can create
- **Tool Names**: Capabilities (read files, run commands, etc.)
- **Output Mode**: Response format (last message, report, all messages)
- **Prompt Schema**: Input validation rules
### Agent Configuration

When creating agent templates, you define all aspects of the agent from scratch.

#### id (string, required)

Unique identifier for this agent. Must contain only lowercase letters, numbers, and hyphens.

```json
"id": "code-reviewer"
```

#### displayName (string, required)

Human-readable name for the agent.

```json
"displayName": "Code Review Specialist"
```

#### spawnerPrompt (string, optional)

Prompt for when and why to spawn this agent. Include the main purpose and use cases. This field is key if the agent is intended to be spawned by other agents.

```json
"spawnerPrompt": "Spawn this agent for thorough code review, focusing on bugs, security issues, and best practices"
```
### Model Configuration

#### model (string, required)

The model to use, which can be any model string from OpenRouter.

#### reasoningOptions (object, optional)

Controls model reasoning behavior using OpenRouter-style settings.

Fields:

- `enabled` (boolean, default: false) — Turn reasoning mode on for supported models.
- `exclude` (boolean, default: false) — If true, omit model-revealed reasoning content from responses (when available), returning only the final answer.
- `effort` ("low" | "medium" | "high") — Increase or decrease how much the model "thinks" before answering. Higher effort typically improves quality on hard tasks at the cost of more reasoning tokens.

Notes:

- Patterned after OpenRouter's "max tokens for reasoning": higher effort maps to a higher internal reasoning token budget. See the OpenRouter docs for background.
- Only supported by models that expose reasoning features. When unsupported, these options are ignored.
#### spawnableAgents (string[], optional)

Other agents this agent can spawn. Use the fully qualified agent ID from the agent store (`publisher/name@version`) or the agent ID from a local agent file.

⚠️ **Important:** When referencing built-in agents, you must specify both publisher and version (e.g., `codebuff/reviewer@0.0.1`). Omit publisher/version only for local agents defined in your `.agents/` directory.

Referencing agents:

- Published/built-in agents: `"codebuff/file-picker@0.0.1"` (publisher and version required!)
- Local agents: `"my-custom-agent"` (just the agent ID)

Available built-in agents:

- `codebuff/base` - Main coding assistant
- `codebuff/reviewer` - Code review agent
- `codebuff/thinker` - Deep thinking agent
- `codebuff/researcher` - Research and documentation agent
- `codebuff/planner` - Planning and architecture agent
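The two reference forms can be seen side by side below. The regex is only an illustration of the `publisher/name@version` shape, not Codebuff's actual validation logic.

```typescript
const spawnableAgents = [
  'codebuff/reviewer@0.0.1', // published/built-in: publisher and version required
  'my-custom-agent',         // local agent in .agents/: bare ID
]

// Illustrative shape check for fully qualified IDs: publisher/name@version
const qualified = /^[a-z0-9-]+\/[a-z0-9-]+@\d+\.\d+\.\d+$/

console.log(spawnableAgents.map((id) => qualified.test(id))) // [ true, false ]
```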
#### systemPrompt (string or object, optional)

Background information for the agent. Fairly optional - prefer using `instructionsPrompt` for agent instructions.

#### instructionsPrompt (string or object, optional)

Instructions for the agent. This is the best way to shape the agent's behavior; it is inserted after each user input.

#### stepPrompt (string or object, optional)

Prompt inserted at each agent step. Powerful for changing behavior, but usually not necessary for smart models.
### Programmatic Control

#### handleSteps (generator function, optional)

🚀 This is what makes Codebuff agents truly powerful! Unlike traditional prompt-based agents, `handleSteps` lets you write actual code to control agent behavior.

Programmatically control the agent's execution using a TypeScript generator function. This enables:

- Dynamic decision making based on tool results
- Complex orchestration between multiple tools and agents
- Conditional branching based on file contents or agent responses
- Iterative refinement until desired results are achieved
- State management across multiple steps
What you can yield:

| Yield Value | What It Does |
| --- | --- |
| Tool call object | Execute a specific tool and get its result |
| `'STEP'` | Run the agent's LLM for one generation step |
| `'STEP_ALL'` | Let the agent run until completion |
| `return` | End the agent's turn immediately |
Tool call pattern:

```typescript
const { toolResult, toolError } = yield {
  toolName: 'read_files',
  input: { paths: ['file.ts'] }
}
// Now you can use toolResult to make decisions!
```
Example:

```typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // First, read some files
  const { toolResult } = yield {
    toolName: 'read_files',
    input: { paths: ['src/index.ts', 'src/config.ts'] }
  }

  // Then spawn a thinker agent
  yield {
    toolName: 'spawn_agents',
    input: {
      agents: [{
        agent_type: 'thinker',
        prompt: 'Analyze this code structure'
      }]
    }
  }

  // Let the agent take over from here
  yield 'STEP_ALL'
}
```
### Schema Validation

#### inputSchema (object, optional)

JSON Schema definitions for validating `prompt` and `params` when spawning the agent.
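For instance, an agent that accepts both a prompt and structured params might declare something like the following. The `params` block is hypothetical (only the `prompt` field appears in the examples in this document); `searchPattern` mirrors the `params.searchPattern` usage in the `handleSteps` example below.

```typescript
// Hedged sketch of an inputSchema with both prompt and params validation
const inputSchema = {
  prompt: {
    type: 'string',
    description: 'What to analyze or change',
  },
  // Hypothetical params schema: extra structured input for the agent
  params: {
    type: 'object',
    properties: {
      searchPattern: { type: 'string', description: 'Pattern passed to code_search' },
    },
  },
}
```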
A fuller `handleSteps` example that chains tool results (the closing `yield 'STEP_ALL'` is added here to complete the fragment):

```typescript
handleSteps: function* ({ agentState, prompt, params }) {
  // 1. Dynamically find relevant files
  const { toolResult: searchResults } = yield {
    toolName: 'code_search',
    input: { pattern: params.searchPattern || 'TODO' }
  }

  // 2. Parse results and decide what to read
  const files = JSON.parse(searchResults || '[]')
  if (files.length > 0) {
    const { toolResult: fileContents } = yield {
      toolName: 'read_files',
      input: { paths: files.slice(0, 10) }
    }

    // 3. Conditionally spawn different agents based on content
    if (fileContents?.includes('security')) {
      yield {
        toolName: 'spawn_agents',
        input: {
          agents: [{
            agent_type: 'security-reviewer',
            prompt: `Review security implications in: ${files.join(', ')}`
          }]
        }
      }
    }
  }

  // 4. Let the LLM handle the rest with context
  yield 'STEP_ALL'
}
```
**Why This Matters:**
- Traditional agents rely solely on prompts and hope the LLM makes the right decisions
- With `handleSteps`, you have **deterministic control** over the agent's workflow
- You can implement complex logic that would be impossible with prompts alone
- Results from one tool directly inform the next action programmatically
### Agent Example
**.agents/documentation-writer.ts**
```typescript
import { AgentDefinition } from './types/agent-definition'

const definition: AgentDefinition = {
  id: "documentation-writer",
  version: "1.0.0",
  publisher: "mycompany",
  displayName: "Documentation Writer",
  spawnerPrompt: "Spawn this agent for creating comprehensive documentation, API docs, or user guides",
  model: "anthropic/claude-4-sonnet-20250522",
  outputMode: "last_message",
  includeMessageHistory: true,
  toolNames: [
    "read_files",
    "write_file",
    "code_search",
    "spawn_agents",
    "end_turn"
  ],
  spawnableAgents: ["codebuff/researcher@0.0.1"],
  inputSchema: {
    prompt: {
      type: "string",
      description: "What documentation to create or update"
    }
  },
  systemPrompt: {
    path: "./prompts/doc-writer-system.md"
  },
  instructionsPrompt: "Create comprehensive documentation based on the user's request. Research existing code first.",
  stepPrompt: "Continue working on the documentation. Use end_turn when complete."
}

export default definition
```
## Troubleshooting Agent Customization

Quick fixes for common agent customization issues.

### Missing required fields

```json
{
  "id": "my-agent",
  "override": false,
  "displayName": "My Agent"
  // ❌ Missing required fields for new agents
}
```

Fix: Include all required fields for new agents:

```json
{
  "id": "my-agent",
  "version": "1.0.0",
  "override": false,
  "displayName": "My Agent",
  "purpose": "Brief description of the agent's purpose",
  "model": "anthropic/claude-4-sonnet-20250522",
  "systemPrompt": "You are a helpful assistant...",
  "instructionsPrompt": "Process the user's request...",
  "stepPrompt": "Continue working on the task..."
}
```
### "Path not found"

Fix: Use project-root-relative paths (e.g., `.agents/templates/my-prompt.md`) and verify the file exists.
### Agent Behavior Issues

#### Agent Not Using Custom Prompts

Symptoms:

- Agent behaves like the default version
- Custom instructions ignored

Debug steps:

1. Check the override is properly applied:

```bash
# Restart Codebuff to reload templates
codebuff
```

2. Verify the override syntax:

```json
{
  "id": "CodebuffAI/reviewer", // ✅ Exact match required
  "override": true, // ✅ Must be true for overrides
  "systemPrompt": {
    "type": "append", // ✅ Valid override type
    "content": "Custom instructions..."
  }
}
```
#### Agent Spawning Wrong Sub-agents

Symptoms:

- Unexpected agents being created
- Missing expected specialized agents

Solutions:

1. Check the `spawnableAgents` configuration:

```json
{
  "spawnableAgents": {
    "type": "replace", // Use "replace" to override completely
    "content": ["researcher", "thinker"]
  }
}
```

2. Verify agent names are correct (no typos)
### Performance Issues

#### Agent Taking Too Long

Causes:

- Complex prompts causing slow generation
- Too many tools enabled
- Large context from message history

Solutions:

- Simplify prompts and remove unnecessary instructions
- Limit `toolNames` to only required tools
- Set `includeMessageHistory: false` for stateless agents
- Use faster models for simple tasks:

```json
{
  "model": "anthropic/claude-3-5-haiku-20241022" // Faster model
}
```
#### High Credit Usage

Causes:

- Using expensive models unnecessarily
- Agents spawning too many sub-agents
- Large context windows

Solutions:

- Use cost-effective models:

```json
{
  "model": "google/gemini-2.5-flash" // More economical
}
```