A fast, self-contained AI coding assistant that lives entirely in your terminal. Smart multi-model routing, native Anthropic/OpenAI/Gemini providers, CLI bridges, a Charm v2 TUI, cost ceilings, fallback chains, and an interactive settings page, all in one binary.
No cloud syncing, no Electron, no runtime. Just a compiled binary that talks to the AI providers you already use.
Three purpose-built lenses — Focus for deep reasoning, Scan for fast exploration, Craft for code generation — automatically route to the right model and provider for the task.
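Conceptually, each lens maps to a provider/model pairing. The sketch below is illustrative only: the `Route` type, `RouteFor` helper, and the specific model IDs are assumptions for demonstration, not Oculus's actual routing table or API.

```go
package main

import "fmt"

// Route pairs a provider with a model ID.
type Route struct {
	Provider string
	Model    string
}

// Lens names match the three lenses described above; the
// provider/model pairings here are illustrative assumptions.
var lensRoutes = map[string]Route{
	"focus": {Provider: "anthropic", Model: "claude-opus-4-6"},   // deep reasoning
	"scan":  {Provider: "anthropic", Model: "claude-haiku-4-5"},  // fast exploration
	"craft": {Provider: "anthropic", Model: "claude-sonnet-4-6"}, // code generation
}

// RouteFor returns the route for a lens, falling back to "craft"
// for unknown lens names.
func RouteFor(lens string) Route {
	if r, ok := lensRoutes[lens]; ok {
		return r
	}
	return lensRoutes["craft"]
}

func main() {
	fmt.Println(RouteFor("focus").Provider, RouteFor("focus").Model)
}
```

A fallback default keeps routing total: any unrecognised lens still resolves to a usable model instead of failing the request.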
Anthropic, OpenAI-compatible, Bedrock (OpenAI-compatible), native Google Gemini, Ollama (local), Claude CLI, and Codex CLI: swap providers without changing your workflow. Oculus uses native API bridges where possible and subscription-backed CLI bridges where they offer the strongest parity.
Bash execution, file read/write/edit/glob/grep, web search, sub-agent dispatch, git operations, LSP integration, and more — all wired directly into the model's tool-use protocol.
Conversations are persisted as typed episodes. LCM dual-threshold compaction automatically summarises old context, keeping the window fresh without losing history.
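The dual-threshold idea can be sketched as a high-water/low-water scheme: compaction triggers once the context exceeds a high-water mark, then summarises the oldest episodes until usage drops below a low-water mark. The threshold values, the `Episode` shape, and the `Compact` helper below are hypothetical; LCM's actual implementation may differ.

```go
package main

import "fmt"

// Episode is a typed unit of conversation history.
type Episode struct {
	Role   string
	Tokens int
}

// Hypothetical thresholds: compact when usage passes highWater,
// and keep summarising until it falls below lowWater.
const (
	highWater = 100_000
	lowWater  = 60_000
)

func totalTokens(eps []Episode) int {
	n := 0
	for _, e := range eps {
		n += e.Tokens
	}
	return n
}

// Compact replaces the oldest episodes with a single summary episode.
// summarize stands in for the model-backed summariser.
func Compact(eps []Episode, summarize func([]Episode) Episode) []Episode {
	if totalTokens(eps) <= highWater {
		return eps // under the high-water mark: nothing to do
	}
	cut, kept := 0, totalTokens(eps)
	for cut < len(eps)-1 && kept > lowWater {
		kept -= eps[cut].Tokens
		cut++
	}
	summary := summarize(eps[:cut])
	return append([]Episode{summary}, eps[cut:]...)
}

func main() {
	eps := []Episode{{"user", 50_000}, {"assistant", 40_000}, {"user", 30_000}}
	compacted := Compact(eps, func(old []Episode) Episode {
		return Episode{Role: "summary", Tokens: 2_000}
	})
	fmt.Println(len(compacted), totalTokens(compacted))
}
```

Two thresholds instead of one give the scheme hysteresis: each compaction frees a meaningful chunk of the window, so the summariser is not re-invoked on every new message.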
Charm v2 TUI with markdown rendering, syntax-highlighted code blocks, interactive settings page, permission dialogs, model picker, vim keybindings, and a two-section status bar.
Cost ceilings, step limits, query duration caps, permission pre-flight, fallback chains, and boulder recovery. OAuth PKCE with OS keychain. Bash safety analysis with 24 dangerous patterns.
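Pattern-based bash safety analysis can be sketched as a scan of each command against a list of known-dangerous shapes before execution. The four regexes below are a small illustrative subset of my own choosing, not Oculus's actual 24 patterns.

```go
package main

import (
	"fmt"
	"regexp"
)

// A few illustrative dangerous patterns; the real list is larger.
var dangerous = []*regexp.Regexp{
	regexp.MustCompile(`rm\s+-rf\s+/`),    // recursive delete from a root path
	regexp.MustCompile(`\|\s*(ba)?sh\b`),  // piping fetched content into a shell
	regexp.MustCompile(`dd\s+.*of=/dev/`), // raw writes to block devices
	regexp.MustCompile(`chmod\s+-R\s+777`), // recursive world-writable permissions
}

// IsDangerous reports whether a command matches any known pattern.
func IsDangerous(cmd string) bool {
	for _, re := range dangerous {
		if re.MatchString(cmd) {
			return true
		}
	}
	return false
}

func main() {
	for _, cmd := range []string{"rm -rf /", "ls -la"} {
		fmt.Printf("%q dangerous=%v\n", cmd, IsDangerous(cmd))
	}
}
```

A flagged command would then go through the permission pre-flight rather than executing silently, so the pattern list only needs to catch clearly destructive shapes, not every possible misuse.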
Three install paths, one command to start. Oculus auto-detects your API keys and walks you through auth on first run.
```sh
# Go
go install ...@latest

# Homebrew
brew install ...

# npm
npm i -g ...
```
Oculus ships bridges for every major AI provider and supports fully local inference through Ollama.
Claude Opus 4.6, Sonnet 4.6, Haiku 4.5. Native streaming, extended thinking, and tool-use.
GPT-4o, o3, and any provider exposing the OpenAI Chat Completions spec.
Run Llama 3, Mistral, Qwen, and hundreds of open models entirely on-device.
Delegates to the Anthropic `claude` CLI binary when present.
OpenAI Codex bridge via subprocess, with streamed output forwarding.
Native Gemini API bridge via the official Go SDK. Gemini CLI remains available as a lighter-weight bridge when you want to use the local CLI directly.
Install in seconds. No sign-up required — just bring your own API key or use an Ollama local model.