# Providers

Configure LLM provider adapters for CHAOS-AI — Claude, Copilot, Gemini, Aider, OpenRouter, and Ollama.
## Supported Providers
CHAOS ships with six provider adapters. Each is independently configurable — assign different providers to different agents to optimize for cost, quality, and privacy.
| Provider | Type | Best For |
|---|---|---|
| Claude Code (Anthropic) | Cloud | Long-context reasoning, code generation |
| GitHub Copilot CLI | Cloud | Editor-integrated, subscription workflows |
| Gemini CLI (Google) | Cloud | Multimodal, 1M token context |
| Aider | Cloud/Local | Pair programming mode |
| OpenRouter | Cloud Gateway | Route to any supported model |
| Ollama | Local | Air-gapped, zero cost, full privacy |
## Claude Code (Anthropic)

```bash
# .env
ANTHROPIC_API_KEY=sk-ant-...
```

Recommended models: `claude-opus-4-7` (reasoning), `claude-sonnet-4-6` (balanced), `claude-haiku-4-5` (fast/cheap).
## GitHub Copilot CLI

Authenticate via the GitHub CLI — no separate API key needed:

```bash
gh auth login
gh extension install github/gh-copilot
```
## Gemini CLI (Google)

```bash
# .env
GOOGLE_API_KEY=AI...
```

Gemini 2.5 Pro supports a 1M token context window — useful for large-codebase analysis tasks.
## Aider

Aider runs as a subprocess. Ensure it is installed:

```bash
pip install aider-install
aider --install-main-branch
```

CHAOS invokes Aider with the appropriate model flag based on your provider config.
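As a rough illustration, that subprocess invocation can be sketched as below. The helper name and the exact argument set CHAOS passes are assumptions; `--model` and `--message` are standard Aider command-line options:

```python
import subprocess

def build_aider_command(model: str, instruction: str, files: list[str]) -> list[str]:
    # --model selects the backend LLM; --message runs one non-interactive edit
    return ["aider", "--model", model, "--message", instruction, *files]

cmd = build_aider_command("ollama/codellama:13b", "add input validation", ["app.py"])
# subprocess.run(cmd, check=True)  # requires Aider on PATH; left commented here
```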
## OpenRouter

OpenRouter provides access to Claude, GPT, Mistral, LLaMA, and dozens of other models through a single API key:

```bash
# .env
OPENROUTER_API_KEY=sk-or-...
```

Specify the model in your agent definition using OpenRouter's `provider/model` format:

```yaml
model: openrouter/anthropic/claude-opus-4-7
```
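Under the hood, OpenRouter exposes an OpenAI-compatible chat completions endpoint, and the model it receives is the bare `provider/model` string. A minimal sketch, assuming the leading `openrouter/` routing prefix is stripped before the call (the helper names are hypothetical):

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenAI-compatible endpoint

def to_openrouter_model(agent_model: str) -> str:
    # OpenRouter itself expects "provider/model" without the routing prefix
    return agent_model.removeprefix("openrouter/")

def build_request(agent_model: str, prompt: str, api_key: str) -> tuple[dict, bytes]:
    # Headers and JSON body for a single-turn chat completion
    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    body = json.dumps({
        "model": to_openrouter_model(agent_model),
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return headers, body
```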
## Ollama (Local Models)

Ollama runs models locally with no API calls:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2
ollama pull codellama:13b

# Start the server
ollama serve
```

No API key needed. Set the model in your agent definition:

```yaml
model: ollama/codellama:13b
```
Recommended local models:

- `codellama:13b` — code generation
- `llama3.2` — general reasoning
- `deepseek-coder-v2` — code-focused reasoning
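To sanity-check a local setup, you can talk to Ollama's REST API directly — it listens on `localhost:11434` by default, and `POST /api/generate` returns a completion. A minimal sketch (the actual request is left commented out, since it needs `ollama serve` running):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False asks for a single JSON response instead of streamed chunks
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

payload = build_payload("codellama:13b", "Write a binary search in Python.")
# req = urllib.request.Request(OLLAMA_URL, data=payload,
#                              headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```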
## Mixing Providers

Assign different providers to different agents in the same pipeline. In your agent definition YAML frontmatter:

```yaml
# .claude/agents/security-agent.md
---
name: security-agent
model: claude-opus-4-7  # heavyweight reasoning for security
---
```

```yaml
# .claude/agents/test-agent.md
---
name: test-agent
model: ollama/codellama:13b  # local model, no cost
---
```
The PM engine dispatches each agent to its configured provider independently.
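Conceptually, that dispatch amounts to prefix-based routing on the model string. The sketch below illustrates the idea only — adapter names are hypothetical, not CHAOS's actual internals:

```python
def route(model: str) -> str:
    # "provider/model" strings route on the prefix; bare names fall back to pattern checks
    prefix = model.split("/", 1)[0]
    if prefix in {"ollama", "openrouter"}:
        return prefix
    if model.startswith("claude"):
        return "claude-code"
    if model.startswith("gemini"):
        return "gemini-cli"
    return "default"

print(route("ollama/codellama:13b"))  # → ollama
print(route("claude-opus-4-7"))       # → claude-code
```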
## Next Steps
- Pipelines — Combine agents and providers into workflows
- MCP — Connect your editor via MCP
- Configuration — Full environment variable reference