Getting Started
Go from zero to multi-model orchestration in under 5 minutes.
You need Node.js 18+ installed. We recommend Bun for speed, but npm/yarn/pnpm all work.
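Before installing, a quick preflight check can save a confusing failure later. A minimal sketch in portable POSIX shell (assumes `node` is on your PATH; prints a warning instead of failing hard):

```shell
# Warn if the installed Node.js is older than v18
ver="$(node --version 2>/dev/null || echo v0.0.0)"
major="${ver#v}"          # strip the leading "v"
major="${major%%.*}"      # keep only the major version number
if [ "$major" -lt 18 ]; then
  echo "Node.js 18+ required (found: $ver)" >&2
else
  echo "Node.js $ver detected"
fi
```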
Install Iron Rain
The quickest way to install is the one-line curl installer:
curl -fsSL https://raw.githubusercontent.com/howlerops/iron-rain/main/scripts/install.sh | bash
This auto-detects your OS, installs Bun if needed, and installs the iron-rain CLI globally.
You can also install manually with your preferred package manager:
# With npm
npm install @howlerops/iron-rain
npm install -g @howlerops/iron-rain-cli
# With bun
bun add @howlerops/iron-rain
bun add -g @howlerops/iron-rain-cli
# Or run the CLI directly without installing
npx @howlerops/iron-rain-cli
Run the setup wizard
Launch Iron Rain for the first time — if no config file exists, the built-in onboarding wizard starts automatically:
iron-rain
The wizard walks you through four steps:
- Select providers — choose from Ollama (local), Anthropic, OpenAI, Gemini, Claude Code, Codex, or Gemini CLI
- Enter credentials — API keys, connection details, or use env:VAR_NAME to reference environment variables
- Assign model slots — pick a model for each slot (Main, Explore, Execute)
- Review & save — confirm your choices and write iron-rain.json
You can also create iron-rain.json by hand. See the Configuration reference for the full schema, or the Providers guide for per-provider setup.
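For a hand-written starting point, a minimal config might look like this — a sketch reusing only the fields shown in this guide, with every slot pointed at a single local Ollama model:

```json
{
  "slots": {
    "main": { "provider": "ollama", "model": "qwen2.5-coder:32b" },
    "explore": { "provider": "ollama", "model": "qwen2.5-coder:32b" },
    "execute": { "provider": "ollama", "model": "qwen2.5-coder:32b" }
  },
  "providers": {
    "ollama": { "apiBase": "http://localhost:11434" }
  }
}
```

Consult the Configuration reference for any fields beyond these before relying on this shape.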
After onboarding, use the Settings screen inside the TUI to change providers, models, and permissions at any time.
Run your first prompt
Once the wizard saves your config, you can verify everything works in headless mode:
iron-rain --headless "Explain what a monad is in one sentence"
You should see the model's response followed by token and timing stats:
A monad is a design pattern that wraps values in a context...
[success] 142 tokens, 850ms
Analyze your project
Run the /init command to give Iron Rain full context on your codebase:
/init
This maps your repository structure, reads package.json metadata, detects config files, and dispatches an architecture review. Findings are stored as persistent lessons that survive across sessions, so Iron Rain always knows your project's conventions.
Mix models across slots
The real power of Iron Rain is using different models for different tasks. During onboarding (or via the Settings screen), assign specialized models to each slot:
- Cortex (strategy & planning; the main slot) → e.g. Claude Sonnet
- Scout (search & research; the explore slot) → e.g. Ollama qwen2.5-coder:32b (free, local)
- Forge (code edits & commands; the execute slot) → e.g. GPT-4o
The equivalent iron-rain.json looks like:
{
"slots": {
"main": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" },
"explore": { "provider": "ollama", "model": "qwen2.5-coder:32b" },
"execute": { "provider": "openai", "model": "gpt-4o" }
},
"providers": {
"anthropic": { "apiKey": "env:ANTHROPIC_API_KEY" },
"openai": { "apiKey": "env:OPENAI_API_KEY" },
"ollama": { "apiBase": "http://localhost:11434" }
}
}
Now planning tasks route to Claude, search tasks to Ollama, and code edits to GPT-4o — all automatic.
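Because the apiKey entries above use env: references, the actual secrets never live in iron-rain.json — they come from your shell environment. A sketch of the exports (placeholder values, not real keys):

```shell
# Keys referenced by env:ANTHROPIC_API_KEY and env:OPENAI_API_KEY in the config
export ANTHROPIC_API_KEY="your-anthropic-key"   # placeholder, substitute your real key
export OPENAI_API_KEY="your-openai-key"         # placeholder, substitute your real key
# Ollama needs no key; it is reached at the apiBase URL (http://localhost:11434)
```

Launch iron-rain from the same shell (or add the exports to your shell profile) so the process can resolve the env: references.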
Use it programmatically
Import the core package in your TypeScript/JavaScript project:
import {
ModelSlotManager,
OrchestratorKernel,
createBridgeForSlot,
loadConfig
} from '@howlerops/iron-rain';
// Load config from iron-rain.json
const config = await loadConfig();
const slots = new ModelSlotManager(config.slots);
const kernel = new OrchestratorKernel(slots);
// Dispatch a task
const episode = await kernel.dispatch({
id: '1',
prompt: 'Explain this codebase',
toolType: 'strategy' // routes to 'main' slot
});
console.log(episode.result);
console.log(`${episode.tokens} tokens, ${episode.duration}ms`);
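Conceptually, the toolType on each task decides which slot handles it. A hypothetical sketch of that lookup — not Iron Rain's real implementation, and only the 'strategy' → main mapping is documented above; the other toolType names are assumptions for illustration:

```typescript
// Hypothetical toolType → slot routing table (sketch, not Iron Rain's actual code)
type Slot = "main" | "explore" | "execute";

const routes: Record<string, Slot> = {
  strategy: "main",    // documented above: planning tasks go to the main slot
  research: "explore", // assumed name for search/research tasks
  edit: "execute",     // assumed name for code-edit tasks
};

function routeToolType(toolType: string): Slot {
  // Fall back to the main slot for unrecognized toolTypes
  return routes[toolType] ?? "main";
}
```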
Use @ references for context
Inject files, directories, git state, or images directly into your prompt with @ prefixes:
# Inject a file's contents
@./src/index.ts explain this file
# Inject git diff
@git:diff what changed?
# Inject a directory listing
@dir:src/components/ what's the structure?
# Attach an image for multimodal models
@./screenshot.png implement this design
# Combine multiple references
@./README.md @git:status summarize the project state
You can also add external directories to the context scope with /context add ../path.
Inject context mid-stream
While the agent is streaming a response, type additional instructions and press Enter. The stream pauses, your context is injected, and the agent continues with the updated information. Press Esc to cancel entirely.
Plan & execute complex tasks
For larger features, use the built-in planner:
/plan Add user authentication with JWT tokens
Cortex generates a PRD and breaks it into tasks. Review with approve, reject, or edit <feedback>. Once approved, Forge executes each task sequentially with auto-commit support.
For iterative work, use the loop command:
/loop Fix failing tests --until "ALL TESTS PASSING"