Configuration
All configuration options for Iron Rain, from config file discovery to a full reference for every field.
The easiest way to create your config is to run iron-rain without a config file — the onboarding wizard walks you through provider selection, API key entry, and model slot assignment, then writes iron-rain.json for you. You can also edit your config later from the Settings screen inside the TUI.
Config file discovery
Iron Rain searches for a config file starting in the current directory, walking up to your home directory. It looks for these filenames in order:
1. iron-rain.json
2. iron-rain.jsonc (supports comments)
3. .iron-rainrc.json
The first file found wins. You can place it in your project root for per-project config, or in ~ for a global default.
Full config reference
```jsonc
{
  // Model slot assignments (required for multi-model)
  "slots": {
    "main": { "provider": "anthropic", "model": "claude-opus-4-6" },
    "explore": { "provider": "ollama", "model": "qwen2.5-coder:32b" },
    "execute": { "provider": "openai", "model": "gpt-4o" }
  },
  // Provider credentials
  "providers": {
    "anthropic": { "apiKey": "env:ANTHROPIC_API_KEY" },
    "openai": { "apiKey": "env:OPENAI_API_KEY" },
    "ollama": { "apiBase": "http://localhost:11434" }
  },
  // Tool permissions
  "permission": {
    "*": "allow",
    "bash": "ask",
    "edit": "ask"
  },
  // Thinking levels are set per slot (see Slots):
  // "thinkingLevel": "off" | "low" | "medium" | "high"
  // Active agent profile
  "agent": "build",
  // Auto-update settings
  "updates": {
    "autoCheck": true,
    "channel": "stable"
  },
  // LCM (context management) settings
  "lcm": {
    "enabled": true,
    "episodes": {
      "maxEpisodeTokens": 4000
    }
  },
  // MCP servers
  "mcpServers": {
    "my-server": { "command": "node", "args": ["./server.js"] }
  },
  // Skill discovery
  "skills": {
    "autoDiscover": true,
    "paths": ["./custom-skills"]
  },
  // Auto-commit after plan tasks
  "autoCommit": {
    "enabled": false,
    "messagePrefix": "iron-rain:"
  },
  // Sandbox for command execution
  "sandbox": {
    "backend": "none",
    "allowNetwork": false
  },
  // Project rules injection
  "rules": { "disabled": false },
  // Session persistence
  "session": { "autoResume": true, "maxHistory": 50 },
  // Memory / lessons
  "memory": { "autoLearn": true, "maxLessons": 50 },
  // Repo map generation
  "repoMap": { "enabled": true, "maxTokens": 2000 },
  // Cost tracking overrides
  "costs": {
    "custom-model": { "input": 0.001, "output": 0.002 }
  },
  // Remote config URL
  "configUrl": "https://example.com/iron-rain.json",
  // TUI theme
  "theme": "default"
}
```
Slots
Each slot routes a category of work to a specific model. There are three slots:
| Slot | Purpose | Routed tool types |
|---|---|---|
| Cortex (main) | Strategy, planning, conversation | strategy, plan, conversation |
| Scout (explore) | Search, read, research | grep, glob, read, search |
| Forge (execute) | Write code, run commands | edit, write, bash |
Slot config fields
| Field | Type | Required | Description |
|---|---|---|---|
| provider | string | Yes | Provider name (see Providers) |
| model | string | Yes | Model identifier for the provider |
| apiKey | string | No | Override the provider-level API key |
| apiBase | string | No | Override the provider-level API base URL |
| thinkingLevel | string | No | Reasoning depth: "off", "low", "medium", or "high" |
| systemPrompt | string | No | Custom system prompt for this slot |
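For example, a slot can override provider-level credentials and set its own reasoning depth. This is an illustrative sketch — the environment variable name and system prompt are placeholders, not defaults:

```jsonc
{
  "slots": {
    "main": {
      "provider": "anthropic",
      "model": "claude-opus-4-6",
      "thinkingLevel": "high",
      "systemPrompt": "You are a careful senior engineer."
    },
    "execute": {
      "provider": "openai",
      "model": "gpt-4o",
      // Per-slot override of the provider-level API key
      "apiKey": "env:OPENAI_EXEC_KEY",
      "thinkingLevel": "off"
    }
  }
}
```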
Thinking levels
Set thinkingLevel per slot to control reasoning depth. Each provider maps this to its native parameter:
| Level | Anthropic (budget_tokens) | OpenAI (reasoning_effort) | Gemini (thinkingBudget) |
|---|---|---|---|
| off | disabled | disabled | disabled |
| low | 4,096 | low | 4,096 |
| medium | 16,384 | medium | 12,288 |
| high | 32,768 | high | 24,576 |
You can also change thinking levels at runtime from the Settings screen.
Providers
The providers object maps provider names to their credentials. These are shared across all slots that reference the provider.
| Field | Type | Description |
|---|---|---|
| apiKey | string | API key. Use "env:VAR_NAME" to read from environment. |
| apiBase | string (URL) | Base URL for the API endpoint. |
Use the "env:VAR_NAME" syntax to keep secrets out of your config file. Iron Rain resolves these at runtime. Example: "apiKey": "env:ANTHROPIC_API_KEY"
Permissions
Control which tools require confirmation before running:
| Value | Behavior |
|---|---|
"allow" | Execute without confirmation |
"ask" | Prompt the user before executing |
"deny" | Block the tool entirely |
Use "*" as a wildcard key for the default policy, then override specific tools.
LCM (Context Management)
| Field | Type | Default | Description |
|---|---|---|---|
| lcm.enabled | boolean | true | Enable episode tracking and context management |
| lcm.episodes.maxEpisodeTokens | number | 4000 | Maximum tokens per episode summary |
MCP Servers
Iron Rain can connect to MCP (Model Context Protocol) servers. Tool descriptions from connected servers are injected into the system prompt automatically.
```json
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["./mcp-server.js"]
    }
  }
}
```
| Field | Type | Description |
|---|---|---|
| command | string | Command to launch the MCP server |
| args | string[] | Arguments to pass to the command |
Connections are established at startup (non-blocking). Tool descriptions are merged into the system prompt for every dispatch.
Skills
Iron Rain discovers and loads skill files from your project. Skills provide slash commands that extend the TUI.
```json
{
  "skills": {
    "autoDiscover": true,
    "paths": ["./custom-skills"]
  }
}
```
| Field | Type | Default | Description |
|---|---|---|---|
| skills.autoDiscover | boolean | true | Automatically discover skill files in standard locations |
| skills.paths | string[] | [] | Additional directories to search for skill files |
Auto-Commit
Automatically commit changes after each plan task completes.
| Field | Type | Default | Description |
|---|---|---|---|
| autoCommit.enabled | boolean | false | Enable auto-commit after plan tasks |
| autoCommit.messagePrefix | string | "iron-rain:" | Prefix for auto-commit messages |
Sandbox
Run bash commands in an isolated sandbox to prevent unintended side effects.
| Field | Type | Default | Description |
|---|---|---|---|
| sandbox.backend | string | "none" | Sandbox backend: "none", "seatbelt" (macOS), "docker", or "gvisor" |
| sandbox.allowNetwork | boolean | false | Allow network access inside the sandbox |
| sandbox.allowedWritePaths | string[] | [] | Paths the sandbox can write to |
| sandbox.docker.image | string | "node:20-slim" | Docker image (when backend is "docker") |
| sandbox.docker.memoryLimit | string | "2g" | Memory limit for the Docker container |
| sandbox.docker.cpuLimit | string | "2" | CPU limit for the Docker container |
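For example, a Docker-backed sandbox that keeps the network off but allows writes to a build directory might look like this (the write path is illustrative):

```jsonc
{
  "sandbox": {
    "backend": "docker",
    "allowNetwork": false,
    "allowedWritePaths": ["./build"],
    "docker": {
      "image": "node:20-slim",
      "memoryLimit": "2g",
      "cpuLimit": "2"
    }
  }
}
```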
Rules
Project rules are injected into every system prompt. Iron Rain looks for rules in IRON-RAIN.md, CLAUDE.md, and .iron-rain/rules/*.md.
| Field | Type | Default | Description |
|---|---|---|---|
| rules.paths | string[] | [] | Additional paths to load rules from |
| rules.disabled | boolean | false | Disable rules injection entirely |
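To pull in rules beyond the standard locations, add paths explicitly (the file path here is a placeholder for your own rules file):

```jsonc
{
  "rules": {
    "paths": ["./docs/conventions.md"],
    "disabled": false
  }
}
```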
Session
Control session persistence behavior.
| Field | Type | Default | Description |
|---|---|---|---|
| session.autoResume | boolean | true | Auto-resume the last session on startup |
| session.maxHistory | number | 50 | Maximum messages to retain in history |
Memory
Configure the persistent lesson system (~/.iron-rain/sessions.db).
| Field | Type | Default | Description |
|---|---|---|---|
| memory.autoLearn | boolean | true | Automatically extract lessons from conversations |
| memory.maxLessons | number | 50 | Maximum stored lessons (oldest are pruned) |
Repo Map
Auto-generated repository map injected into system prompts for structural awareness.
| Field | Type | Default | Description |
|---|---|---|---|
| repoMap.enabled | boolean | true | Enable repo map generation |
| repoMap.maxTokens | number | 2000 | Maximum token budget for the repo map |
The repo map walks the directory tree (respecting .ironrainignore and .gitignore), extracts file symbols (exports, functions, classes), and truncates to the token budget.
Costs
Override or add custom model pricing for cost tracking. Keys are model names; values specify cost in dollars per 1,000 tokens (so "input": 0.015 means $15 per million input tokens).
```json
{
  "costs": {
    "my-custom-model": { "input": 0.001, "output": 0.002 },
    "claude-opus-4-6": { "input": 0.015, "output": 0.075 }
  }
}
```
Built-in models have default pricing. Use this section to override or add models not in the registry.
Plugins
Load plugins that hook into the dispatch lifecycle.
| Field | Type | Default | Description |
|---|---|---|---|
| plugins.paths | string[] | [] | Paths to plugin files or directories |
| plugins.hooks | object | {} | Map of hook name to script path for inline hooks |
See the Plugin SDK for details on creating plugins.
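A plugins block might look like the sketch below. The hook name and script paths are illustrative — consult the Plugin SDK for the actual hook names:

```jsonc
{
  "plugins": {
    "paths": ["./plugins"],
    "hooks": {
      // Hook name is a placeholder; see the Plugin SDK for real hook names
      "onDispatch": "./hooks/log-dispatch.js"
    }
  }
}
```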
Context
Fine-tune context management behavior.
| Field | Type | Default | Description |
|---|---|---|---|
| context.hotWindowSize | number | 6 | Number of recent messages kept verbatim |
| context.maxContextTokens | number | 8000 | Maximum token budget for context |
| context.maxFileSize | number | 102400 | Max file size for @ references (bytes) |
| context.maxImageSize | number | 20971520 | Max image size for @ references (bytes) |
| context.toolOutputMaxTokens | number | 2000 | Max tokens for tool output in context |
Resilience
Configure retry and circuit breaker behavior for API calls.
| Field | Type | Default | Description |
|---|---|---|---|
| resilience.maxRetries | number | 3 | Maximum retries on transient failures (429, 5xx) |
| resilience.circuitBreakerThreshold | number | 5 | Consecutive failures before circuit opens |
| resilience.circuitBreakerResetMs | number | 60000 | Time before circuit breaker resets (ms) |
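For example, on a flaky connection you might retry more aggressively but trip the breaker sooner (these values are illustrative, not recommendations):

```jsonc
{
  "resilience": {
    "maxRetries": 5,
    "circuitBreakerThreshold": 3,
    "circuitBreakerResetMs": 30000
  }
}
```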
LSP (experimental)
Connect Language Server Protocol servers for enhanced code intelligence.
| Field | Type | Default | Description |
|---|---|---|---|
| lsp.enabled | boolean | false | Enable LSP integration |
| lsp.servers | object | {} | Map of language to LSP server config |
```json
{
  "lsp": {
    "enabled": true,
    "servers": {
      "typescript": { "command": "typescript-language-server", "args": ["--stdio"] }
    }
  }
}
```
Voice (experimental)
Enable voice input for hands-free interaction.
| Field | Type | Default | Description |
|---|---|---|---|
| voice.enabled | boolean | false | Enable voice input |
| voice.engine | string | "system" | Speech engine: "whisper" or "system" |
| voice.whisperModel | string | — | Whisper model name (when engine is "whisper") |
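A Whisper-backed setup might look like the sketch below; the model name is illustrative, not a documented default:

```jsonc
{
  "voice": {
    "enabled": true,
    "engine": "whisper",
    "whisperModel": "base.en" // placeholder model name
  }
}
```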
Remote Config
Fetch and merge a remote config at startup. Useful for team-wide settings.
```json
{
  "configUrl": "https://example.com/team-iron-rain.json"
}
```
The remote config is deep-merged with the local config. Local values take precedence. The fetch runs at startup with a 5-second timeout.
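For example, a team could pin shared defaults remotely while each developer overrides only what they need locally. The remote contents below are hypothetical, shown only to illustrate the merge direction:

```jsonc
// Remote config (fetched from configUrl), hypothetical contents:
// { "theme": "dark", "autoCommit": { "enabled": true } }

// Local iron-rain.json — on conflict, local values win:
{
  "configUrl": "https://example.com/team-iron-rain.json",
  "theme": "default"
  // Effective config: theme "default" (local), autoCommit from remote
}
```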
Other fields
| Field | Type | Default | Description |
|---|---|---|---|
| agent | string | "build" | Active agent profile name |
| theme | string | "default" | TUI color theme |