Configuration

All configuration options for Iron Rain, from config file discovery to a full field-by-field reference.

Setup wizard

The easiest way to create your config is to run iron-rain without a config file — the onboarding wizard walks you through provider selection, API key entry, and model slot assignment, then writes iron-rain.json for you. You can also edit your config later from the Settings screen inside the TUI.

Config file discovery

Iron Rain searches for a config file starting in the current directory, walking up to your home directory. It looks for these filenames in order:

  1. iron-rain.json
  2. iron-rain.jsonc (supports comments)
  3. .iron-rainrc.json

The first file found wins. You can place it in your project root for per-project config, or in ~ for a global default.
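The walk-up search can be sketched as a path generator (an illustration of the documented rules, not Iron Rain's actual code):

```typescript
import * as path from "node:path";

// Candidate config paths in search order: all three filenames in the
// current directory, then each parent, stopping at the home directory.
const CONFIG_NAMES = ["iron-rain.json", "iron-rain.jsonc", ".iron-rainrc.json"];

function candidatePaths(cwd: string, home: string): string[] {
  const candidates: string[] = [];
  let dir = cwd;
  for (;;) {
    for (const name of CONFIG_NAMES) candidates.push(path.join(dir, name));
    if (dir === home || dir === path.dirname(dir)) break; // reached ~ or the filesystem root
    dir = path.dirname(dir);
  }
  return candidates; // the first existing file in this list wins
}
```

Checking each candidate for existence and taking the first hit reproduces the "first file found wins" behavior.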

Full config reference

{
  // Model slot assignments (required for multi-model)
  "slots": {
    "main":    { "provider": "anthropic", "model": "claude-opus-4-6" },
    "explore": { "provider": "ollama",    "model": "qwen2.5-coder:32b" },
    "execute": { "provider": "openai",    "model": "gpt-4o" }
  },

  // Provider credentials
  "providers": {
    "anthropic": { "apiKey": "env:ANTHROPIC_API_KEY" },
    "openai":    { "apiKey": "env:OPENAI_API_KEY" },
    "ollama":    { "apiBase": "http://localhost:11434" }
  },

  // Tool permissions
  "permission": {
    "*":    "allow",
    "bash": "ask",
    "edit": "ask"
  },

  // Thinking levels are set per slot via each slot's "thinkingLevel" field:
  // "off" | "low" | "medium" | "high"

  // Active agent profile
  "agent": "build",

  // Auto-update settings
  "updates": {
    "autoCheck": true,
    "channel": "stable"
  },

  // LCM (context management) settings
  "lcm": {
    "enabled": true,
    "episodes": {
      "maxEpisodeTokens": 4000
    }
  },

  // MCP servers
  "mcpServers": {
    "my-server": { "command": "node", "args": ["./server.js"] }
  },

  // Skill discovery
  "skills": {
    "autoDiscover": true,
    "paths": ["./custom-skills"]
  },

  // Auto-commit after plan tasks
  "autoCommit": {
    "enabled": false,
    "messagePrefix": "iron-rain:"
  },

  // Sandbox for command execution
  "sandbox": {
    "backend": "none",
    "allowNetwork": false
  },

  // Project rules injection
  "rules": { "disabled": false },

  // Session persistence
  "session": { "autoResume": true, "maxHistory": 50 },

  // Memory / lessons
  "memory": { "autoLearn": true, "maxLessons": 50 },

  // Repo map generation
  "repoMap": { "enabled": true, "maxTokens": 2000 },

  // Cost tracking overrides
  "costs": {
    "custom-model": { "input": 0.001, "output": 0.002 }
  },

  // Remote config URL
  "configUrl": "https://example.com/iron-rain.json",

  // TUI theme
  "theme": "default"
}

Slots

Each slot routes a category of work to a specific model. There are three slots:

| Slot | Purpose | Routed tool types |
|------|---------|-------------------|
| Cortex (main) | Strategy, planning, conversation | strategy, plan, conversation |
| Scout (explore) | Search, read, research | grep, glob, read, search |
| Forge (execute) | Write code, run commands | edit, write, bash |

Slot config fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| provider | string | Yes | Provider name (see Providers) |
| model | string | Yes | Model identifier for the provider |
| apiKey | string | No | Override the provider-level API key |
| apiBase | string | No | Override the provider-level API base URL |
| thinkingLevel | string | No | Reasoning depth: "off", "low", "medium", or "high" |
| systemPrompt | string | No | Custom system prompt for this slot |
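For example, a slot can override its provider's defaults and tune its own reasoning depth (the system prompt text here is illustrative):

```jsonc
{
  "slots": {
    "explore": {
      "provider": "ollama",
      "model": "qwen2.5-coder:32b",
      "apiBase": "http://localhost:11434",  // overrides the provider-level base URL
      "thinkingLevel": "low",               // reasoning depth for this slot only
      "systemPrompt": "You are a fast code-search assistant."
    }
  }
}
```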

Thinking levels

Set thinkingLevel per slot to control reasoning depth. Each provider maps this to its native parameter:

| Level | Anthropic (budget_tokens) | OpenAI (reasoning_effort) | Gemini (thinkingBudget) |
|-------|---------------------------|---------------------------|-------------------------|
| off | disabled | disabled | disabled |
| low | 4,096 | low | 4,096 |
| medium | 16,384 | medium | 12,288 |
| high | 32,768 | high | 24,576 |

You can also change thinking levels at runtime from the Settings screen.

Providers

The providers object maps provider names to their credentials. These are shared across all slots that reference the provider.

| Field | Type | Description |
|-------|------|-------------|
| apiKey | string | API key. Use "env:VAR_NAME" to read from the environment. |
| apiBase | string (URL) | Base URL for the API endpoint. |

Environment variables

Use the "env:VAR_NAME" syntax to keep secrets out of your config file. Iron Rain resolves these at runtime. Example: "apiKey": "env:ANTHROPIC_API_KEY"
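Resolution can be pictured as a small helper (an assumed sketch of the behavior, not the actual implementation):

```typescript
// Resolve "env:VAR_NAME" values against the environment; plain values
// pass through unchanged. Failing loudly on a missing variable is an
// assumption about Iron Rain's behavior.
function resolveSecret(value: string): string {
  if (!value.startsWith("env:")) return value;
  const name = value.slice("env:".length);
  const resolved = process.env[name];
  if (resolved === undefined) {
    throw new Error(`Environment variable ${name} is not set`);
  }
  return resolved;
}
```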

Permissions

Control which tools require confirmation before running:

| Value | Behavior |
|-------|----------|
| "allow" | Execute without confirmation |
| "ask" | Prompt the user before executing |
| "deny" | Block the tool entirely |

Use "*" as a wildcard key for the default policy, then override specific tools.
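The lookup is a simple fallback chain: exact tool name, then the wildcard. This sketch assumes a default of "ask" when neither key is present (the actual default is not documented here):

```typescript
type Policy = "allow" | "ask" | "deny";

// Exact key wins over the "*" wildcard; "ask" is an assumed fallback.
function resolvePermission(
  permission: Record<string, Policy>,
  tool: string
): Policy {
  return permission[tool] ?? permission["*"] ?? "ask";
}
```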

LCM (Context Management)

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| lcm.enabled | boolean | true | Enable episode tracking and context management |
| lcm.episodes.maxEpisodeTokens | number | 4000 | Maximum tokens per episode summary |

MCP Servers

Iron Rain can connect to MCP (Model Context Protocol) servers. Tool descriptions from connected servers are injected into the system prompt automatically.

{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["./mcp-server.js"]
    }
  }
}

| Field | Type | Description |
|-------|------|-------------|
| command | string | Command to launch the MCP server |
| args | string[] | Arguments to pass to the command |

Connections are established at startup (non-blocking). Tool descriptions are merged into the system prompt for every dispatch.

Skills

Iron Rain discovers and loads skill files from your project. Skills provide slash commands that extend the TUI.

{
  "skills": {
    "autoDiscover": true,
    "paths": ["./custom-skills"]
  }
}

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| skills.autoDiscover | boolean | true | Automatically discover skill files in standard locations |
| skills.paths | string[] | [] | Additional directories to search for skill files |

Auto-Commit

Automatically commit changes after each plan task completes.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| autoCommit.enabled | boolean | false | Enable auto-commit after plan tasks |
| autoCommit.messagePrefix | string | "iron-rain:" | Prefix for auto-commit messages |

Sandbox

Run bash commands in an isolated sandbox to prevent unintended side effects.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| sandbox.backend | string | "none" | Sandbox backend: "none", "seatbelt" (macOS), "docker", or "gvisor" |
| sandbox.allowNetwork | boolean | false | Allow network access inside the sandbox |
| sandbox.allowedWritePaths | string[] | [] | Paths the sandbox can write to |
| sandbox.docker.image | string | "node:20-slim" | Docker image (when backend is "docker") |
| sandbox.docker.memoryLimit | string | "2g" | Memory limit for the Docker container |
| sandbox.docker.cpuLimit | string | "2" | CPU limit for the Docker container |
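A Docker-backed sandbox that can only write to the build output might look like this (the write path is illustrative):

```jsonc
{
  "sandbox": {
    "backend": "docker",
    "allowNetwork": false,
    "allowedWritePaths": ["./build"],
    "docker": {
      "image": "node:20-slim",
      "memoryLimit": "2g",
      "cpuLimit": "2"
    }
  }
}
```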

Rules

Project rules are injected into every system prompt. Iron Rain looks for rules in IRON-RAIN.md, CLAUDE.md, and .iron-rain/rules/*.md.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| rules.paths | string[] | [] | Additional paths to load rules from |
| rules.disabled | boolean | false | Disable rules injection entirely |
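For example, to pull rules from an extra directory alongside the standard locations (the path is illustrative):

```jsonc
{
  "rules": {
    "paths": ["./docs/conventions"],
    "disabled": false
  }
}
```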

Session

Control session persistence behavior.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| session.autoResume | boolean | true | Auto-resume the last session on startup |
| session.maxHistory | number | 50 | Maximum messages to retain in history |

Memory

Configure the persistent lesson system (~/.iron-rain/sessions.db).

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| memory.autoLearn | boolean | true | Automatically extract lessons from conversations |
| memory.maxLessons | number | 50 | Maximum stored lessons (oldest are pruned) |

Repo Map

Auto-generated repository map injected into system prompts for structural awareness.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| repoMap.enabled | boolean | true | Enable repo map generation |
| repoMap.maxTokens | number | 2000 | Maximum token budget for the repo map |

The repo map walks the directory tree (respecting .ironrainignore and .gitignore), extracts file symbols (exports, functions, classes), and truncates to the token budget.

Costs

Override or add custom model pricing for cost tracking. Keys are model names; values specify the cost in dollars per 1,000 tokens (consistent with the claude-opus-4-6 example below).

{
  "costs": {
    "my-custom-model": { "input": 0.001, "output": 0.002 },
    "claude-opus-4-6": { "input": 0.015, "output": 0.075 }
  }
}

Built-in models have default pricing. Use this section to override or add models not in the registry.
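Under per-1K pricing, the cost of a single request works out as follows (an illustrative helper, not part of Iron Rain's API):

```typescript
interface Rate {
  input: number;  // dollars per 1,000 input tokens
  output: number; // dollars per 1,000 output tokens
}

// Total dollar cost of one request given its token counts.
function requestCost(rate: Rate, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1000) * rate.input + (outputTokens / 1000) * rate.output;
}
```

For example, with the claude-opus-4-6 rates above, 1,000 input and 1,000 output tokens cost $0.015 + $0.075 = $0.09.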

Plugins

Load plugins that hook into the dispatch lifecycle.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| plugins.paths | string[] | [] | Paths to plugin files or directories |
| plugins.hooks | object | {} | Map of hook name to script path for inline hooks |

See the Plugin SDK for details on creating plugins.

Context

Fine-tune context management behavior.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| context.hotWindowSize | number | 6 | Number of recent messages kept verbatim |
| context.maxContextTokens | number | 8000 | Maximum token budget for context |
| context.maxFileSize | number | 102400 | Max file size for @ references (bytes) |
| context.maxImageSize | number | 20971520 | Max image size for @ references (bytes) |
| context.toolOutputMaxTokens | number | 2000 | Max tokens for tool output in context |

Resilience

Configure retry and circuit breaker behavior for API calls.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| resilience.maxRetries | number | 3 | Maximum retries on transient failures (429, 5xx) |
| resilience.circuitBreakerThreshold | number | 5 | Consecutive failures before the circuit opens |
| resilience.circuitBreakerResetMs | number | 60000 | Time before the circuit breaker resets (ms) |
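A retry loop under these settings can be sketched as follows. The exponential backoff delays and error handling are illustrative assumptions, not Iron Rain's actual implementation:

```typescript
// Retry an async call up to maxRetries times, doubling the delay after
// each failure. The final error is rethrown once retries are exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Backoff: 500 ms, 1 s, 2 s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```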

LSP (experimental)

Connect Language Server Protocol servers for enhanced code intelligence.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| lsp.enabled | boolean | false | Enable LSP integration |
| lsp.servers | object | {} | Map of language to LSP server config |

{
  "lsp": {
    "enabled": true,
    "servers": {
      "typescript": { "command": "typescript-language-server", "args": ["--stdio"] }
    }
  }
}

Voice (experimental)

Enable voice input for hands-free interaction.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| voice.enabled | boolean | false | Enable voice input |
| voice.engine | string | "system" | Speech engine: "whisper" or "system" |
| voice.whisperModel | string | | Whisper model name (when engine is "whisper") |
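A Whisper-backed setup might look like this (the model name is illustrative; check your Whisper installation for available models):

```jsonc
{
  "voice": {
    "enabled": true,
    "engine": "whisper",
    "whisperModel": "base.en"
  }
}
```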

Remote Config

Fetch and merge a remote config at startup. Useful for team-wide settings.

{
  "configUrl": "https://example.com/team-iron-rain.json"
}

The remote config is deep-merged with the local config. Local values take precedence. The fetch runs at startup with a 5-second timeout.
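The local-wins merge can be pictured as a recursive object merge (a sketch of the described semantics, not the actual merger; array handling here is an assumption):

```typescript
type Json = { [key: string]: unknown };

function isObject(v: unknown): v is Json {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

// Merge remote and local configs; nested objects merge recursively,
// and any conflicting leaf value comes from the local config.
function deepMerge(remote: Json, local: Json): Json {
  const out: Json = { ...remote };
  for (const [key, value] of Object.entries(local)) {
    out[key] =
      isObject(value) && isObject(out[key])
        ? deepMerge(out[key] as Json, value)
        : value; // local values take precedence
  }
  return out;
}
```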

Other fields

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| agent | string | "build" | Active agent profile name |
| theme | string | "default" | TUI color theme |