
Configuration Reference

ra uses a layered config system; each layer overrides the one to its right:

CLI flags > config file > defaults

Commit a ra.config.yml for a team or project baseline. Use ${VAR} interpolation for secrets and per-environment settings. Use CLI flags for one-off overrides.

Config file

Place the file in your project root. JSON, YAML, and TOML are supported.

  • ra.config.json
  • ra.config.yaml / ra.config.yml
  • ra.config.toml

Config is organized into two sections: app (application infrastructure) and agent (LLM behavior).

Full example:

```yaml
# ra.config.yml
app:
  dataDir: .ra              # root for all runtime data

  providers:
    anthropic:
      apiKey: ${ANTHROPIC_API_KEY}    # resolved from env at load time

  storage:
    maxSessions: 100
    ttlDays: 30

  mcpServers:
    - name: github
      transport: stdio
      command: npx
      args: ["-y", "@modelcontextprotocol/server-github"]
      env:
        GITHUB_PERSONAL_ACCESS_TOKEN: "${GITHUB_TOKEN:-}"

agent:
  provider: ${PROVIDER:-anthropic}     # env override with default
  model: ${MODEL:-claude-sonnet-4-6}
  systemPrompt: You are a helpful coding assistant.
  maxIterations: 0              # 0 = unlimited
  thinking: adaptive
  toolTimeout: 120000
  parallelToolCalls: true       # run tool calls concurrently (default)
  maxTokenBudget: 0             # 0 = unlimited, or set e.g. 200000
  maxDuration: 0                # 0 = unlimited, or set e.g. 300000 (5 min)

  skillDirs:
    - ./skills

  compaction:
    enabled: true
    threshold: 0.90
    strategy: truncate          # or 'summarize'

  context:
    enabled: true
    patterns:
      - "CLAUDE.md"
      - "AGENTS.md"

  tools:
    builtin: true
    # Per-tool overrides (optional)
    # Agent:
    #   maxConcurrency: 2

  middleware:
    beforeModelCall:
      - "./middleware/budget.ts"
    afterToolExecution:
      - "./middleware/audit.ts"

  memory:
    enabled: true
    maxMemories: 1000
    ttlDays: 90
    injectLimit: 5
```

All fields

Agent — Core

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `agent.provider` | `--provider` | `anthropic` | LLM provider |
| `agent.model` | `--model` | provider default | Model name |
| `agent.systemPrompt` | `--system-prompt` | | System prompt text |
| `agent.maxIterations` | `--max-iterations` | `0` (unlimited) | Max agent loop iterations (0 = unlimited) |
| `agent.thinking` | `--thinking` | `off` | Thinking mode: `off`, `low`, `medium`, `high`, `adaptive` |
| `agent.thinkingBudgetCap` | `--thinking-budget-cap` | | Max thinking budget tokens (caps the level-based default) |
| `agent.toolTimeout` | | `120000` | Per-tool and middleware timeout (ms) |
| `agent.parallelToolCalls` | | `true` | Execute tool calls in parallel when the model returns multiple |
| `agent.maxTokenBudget` | `--max-token-budget` | `0` | Max total tokens (input + output) before the loop stops. 0 = unlimited |
| `agent.maxDuration` | `--max-duration` | `0` | Max wall-clock duration (ms) before the loop stops. 0 = unlimited |
| `agent.tools.builtin` | `--tools-builtin` | `true` | Enable/disable built-in tools |

Agent — Permissions

Regex-based rules controlling what tools can do. See the Permissions guide for full details and examples.

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `agent.permissions.no_rules` | | `false` | Disable all permission checks |
| `agent.permissions.default_action` | | `allow` | Action when no rule matches: `allow` or `deny` |
| `agent.permissions.rules` | | `[]` | Array of per-tool regex rules |

```yaml
agent:
  permissions:
    rules:
      - tool: Bash
        command:
          allow: ["^git ", "^bun "]
          deny: ["--force", "--hard"]
      - tool: Write
        path:
          allow: ["^src/", "^tests/"]
```

Agent — Skills

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `agent.skillDirs` | `--skill-dir` | `['.claude/skills', ...]` | Directories to scan for skills |

Agent — Compaction

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `agent.compaction.enabled` | | `true` | Enable automatic context compaction |
| `agent.compaction.threshold` | | `0.90` | Trigger at this fraction of the context window |
| `agent.compaction.strategy` | | `'truncate'` | `'truncate'` drops old messages (free, cache-friendly); `'summarize'` calls a model with metadata enrichment |
| `agent.compaction.model` | | provider default | Model for summarization (only used with `strategy: 'summarize'`) |
| `agent.compaction.prompt` | | built-in | Custom summarization prompt. When set, bypasses metadata enrichment and uses the LLM response as-is |
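
For deployments that prefer summaries over dropped messages, the two strategies can be combined with a cheaper summarization model. A sketch (the threshold and model choice here are illustrative, not recommendations):

```yaml
agent:
  compaction:
    enabled: true
    threshold: 0.85                    # compact slightly earlier than the 0.90 default
    strategy: summarize                # summarize old messages instead of truncating
    model: claude-haiku-4-5-20251001   # smaller model keeps summarization cheap
```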

Agent — Context

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `agent.context.enabled` | | `true` | Enable context file discovery |
| `agent.context.patterns` | | `['CLAUDE.md', 'AGENTS.md', '.cursorrules', '.windsurfrules', '.github/copilot-instructions.md']` | Glob patterns for context files |
| `agent.context.resolvers` | | built-in | Pattern resolvers for `@file` and `url:` |

Agent — Tools

The agent.tools section controls which built-in tools are registered and their per-tool settings. See Built-in Tools for full details.

```yaml
agent:
  tools:
    builtin: true             # master switch (default: true)
    custom:                   # load tools from files
      - "./tools/deploy.ts"
      - "./tools/db-query.ts"
      - "./tools/health-check.sh"  # shell scripts auto-detected
    Read:
      rootDir: "./src"        # constrain reads to this directory
    Write:
      rootDir: "./src"
    WebFetch:
      enabled: false          # disable a specific tool
    Agent:
      maxConcurrency: 2       # limit parallel subagent tasks
```

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `agent.tools.builtin` | boolean | `true` | Master switch: register all built-in tools unless individually disabled |
| `agent.tools.custom` | string[] | `[]` | File paths to custom tool files (JS/TS/shell scripts) |
| `agent.tools.<ToolName>.enabled` | boolean | `true` | Enable or disable a specific tool |
| `agent.tools.<ToolName>.rootDir` | string | | Restrict filesystem tools to this directory |
| `agent.tools.<ToolName>.maxConcurrency` | number | `4` | Max parallel tasks (`Agent` tool) |

Agent — Subagent

The Agent tool forks parallel copies of the agent. Forks inherit the parent's model, system prompt, tools, thinking level, and maxIterations. Concurrency can be set via agent.tools.Agent.maxConcurrency (see above) or the top-level agent.maxConcurrency as a fallback.

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `agent.maxConcurrency` | | `4` | Fallback max parallel subagent tasks (overridden by `agent.tools.Agent.maxConcurrency`) |

App — Data directory

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `app.dataDir` | `--data-dir` | `.ra` | Root directory for all runtime data (sessions, memory, etc.) |

All runtime data is organized under dataDir: sessions in {dataDir}/sessions/, memory in {dataDir}/memory.db.
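
With the defaults, the layout under `dataDir` looks like:

```
.ra/
├── sessions/    # session storage
└── memory.db    # persistent memory database
```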

App — Storage

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `app.storage.maxSessions` | `--storage-max-sessions` | `100` | Max sessions before auto-pruning |
| `app.storage.ttlDays` | `--storage-ttl-days` | `30` | Auto-expire sessions older than this (days) |

Agent — Memory

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `agent.memory.enabled` | `--memory` | `false` | Enable persistent memory |
| `agent.memory.maxMemories` | | `1000` | Max stored memories (oldest trimmed) |
| `agent.memory.ttlDays` | | `90` | Auto-prune memories older than this (days) |
| `agent.memory.injectLimit` | | `5` | Memories to inject as context per loop (0 to disable) |

App — Observability

| Field | Default | Description |
| --- | --- | --- |
| `app.logsEnabled` | `true` | Enable session logs |
| `app.logLevel` | `info` | Minimum log level: `debug`, `info`, `warn`, `error` |
| `app.tracesEnabled` | `true` | Enable session traces |

App — MCP Client

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `app.mcpServers` | | `[]` | External MCP servers to connect to |
| `app.mcpLazySchemas` | | `true` | Lazy schema loading: register MCP tools with minimal schemas. The first call returns the full schema, and the model retries with the correct params |

See MCP for details.

App — MCP Server (ra as MCP tool)

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `app.raMcpServer.enabled` | `--mcp-server-enabled` | `false` | Enable ra's MCP server endpoint |
| `app.raMcpServer.port` | `--mcp-server-port` | `3001` | MCP server port |
| `app.raMcpServer.tool.name` | `--mcp-server-tool-name` | `ra` | Tool name exposed to MCP clients |
| `app.raMcpServer.tool.description` | `--mcp-server-tool-description` | `Ra AI agent` | Tool description exposed to MCP clients |

See MCP for details.

App — HTTP

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| | `--http` | | Start the HTTP server |
| `app.http.port` | `--http-port` | `3000` | Server port |
| `app.http.token` | `--http-token` | | Bearer token for authentication |

Cron

Define scheduled agent jobs. Only used when running with `--interface cron`.

```yaml
cron:
  - name: daily-report
    schedule: "0 9 * * 1-5"
    prompt: "Summarize yesterday's git activity"
  - name: health-check
    schedule: "*/30 * * * *"
    prompt: "Check API health"
    agent:
      model: claude-haiku-4-5-20251001
      maxIterations: 5
```

| Field | Required | Description |
| --- | --- | --- |
| `cron[].name` | yes | Human-readable job name (used in logs and traces) |
| `cron[].schedule` | yes | Standard cron expression |
| `cron[].prompt` | yes | Prompt sent to the agent on each run |
| `cron[].agent` | no | Per-job agent overrides (object) or path to a recipe YAML file (string) |

See Cron for details.

App — Interface

| Field | CLI flag | Default | Description |
| --- | --- | --- | --- |
| | `--interface` | `auto` | Interface to run: `cli`, `repl`, `http`, or `cron` |
| | `--mcp-stdio` | | Start as MCP server (stdio) |
| | `--mcp` | | Start as MCP server (HTTP) |
| | `--resume` | | Resume the latest session (or `--resume=<id>` for a specific one) |
| | `--file` | | Attach files to the prompt |
| | `--exec` | | Run a script file |
| | `--show-config` | | Show resolved configuration and exit |
| | `--config` | | Path to config file |

Environment variable interpolation

Config files and defaults support Docker Compose–style ${VAR} interpolation. Three forms are supported:

| Syntax | Behavior |
| --- | --- |
| `${VAR}` | Required; errors if not set |
| `${VAR:-default}` | Use `default` if unset or empty |
| `${VAR-default}` | Use `default` if unset (empty string is kept) |
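
The `:-` vs `-` distinction matches POSIX shell parameter expansion, so it can be sanity-checked in any shell:

```shell
#!/bin/sh
unset GREETING
echo "${GREETING:-fallback}"   # unset: prints "fallback"
echo "${GREETING-fallback}"    # unset: prints "fallback"

GREETING=""
echo "${GREETING:-fallback}"   # empty: prints "fallback" (:- treats empty as missing)
echo "${GREETING-fallback}"    # empty: prints an empty line (- keeps the empty string)
```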

Interpolation runs on both the config file and the built-in defaults, so standard provider env vars work out of the box:

```bash
# These are resolved by the defaults — no config file needed
export ANTHROPIC_API_KEY=sk-...
export OPENAI_API_KEY=sk-...
export GOOGLE_API_KEY=...
export OLLAMA_HOST=http://localhost:11434
export AWS_REGION=us-east-1
export AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
export AZURE_OPENAI_DEPLOYMENT=my-gpt4o
export AZURE_OPENAI_API_KEY=...
```

To make any config field env-driven, use ${} in your config file:

```yaml
agent:
  provider: ${PROVIDER:-anthropic}
  model: ${MODEL:-claude-sonnet-4-6}
  maxIterations: ${MAX_ITERS:-50}     # coerced to number automatically
app:
  http:
    token: ${HTTP_TOKEN:-}
```

String values produced by ${} are automatically coerced to match the expected type (number, boolean) based on the schema.

CLI flags

CLI flags override everything. Use them for one-off runs.

```bash
ra --provider openai \
   --model gpt-4.1 \
   --system-prompt "Be concise" \
   --max-iterations 10 \
   --thinking high \
   --skill code-review \
   --file context.md \
   "Review this code"
```

Provider credentials

Provider API keys are resolved from standard environment variables by default. No RA_ prefix needed.

| Provider | Env var(s) | Docs |
| --- | --- | --- |
| Anthropic | `ANTHROPIC_API_KEY` | Setup |
| OpenAI | `OPENAI_API_KEY` | Setup |
| Google | `GOOGLE_API_KEY` | Setup |
| Azure | `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENT`, `AZURE_OPENAI_API_KEY` | Setup |
| Bedrock | `AWS_REGION` | Setup |
| Ollama | `OLLAMA_HOST` | Setup |

Inspect

Use --show-config to print the fully resolved configuration as JSON and exit. Useful for debugging config layering — shows the final result after merging defaults, config file, and CLI flags. Sensitive values (tokens, API keys) are redacted.

```bash
ra --show-config
ra --show-config --provider openai --model gpt-4.1
ra --show-context   # print discovered context files
```

See also

  • Context Control — compaction, thinking, and pattern resolution details
  • Sessions — session storage and resume
  • Middleware — middleware configuration
  • MCP — MCP client and server configuration

Released under the MIT License.