Real numbers from real workflows. No cherry-picked demos.
Shared persistent context means agents start informed — fewer redundant tokens per task.
| Task | CHAOS-AI | Traditional | Savings |
|---|---|---|---|
| Multi-file refactor | ~45k tokens | ~120k tokens | ~62% |
| Test generation (10 files) | ~30k tokens | ~80k tokens | ~63% |
| Code review (PR with 15 files) | ~20k tokens | ~55k tokens | ~64% |
| Documentation sync | ~15k tokens | ~40k tokens | ~63% |
| Bug investigation + fix | ~25k tokens | ~60k tokens | ~58% |
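The Savings column is just `1 - (CHAOS-AI tokens / traditional tokens)`. A quick sanity check of that arithmetic, using the approximate figures from the table above:

```python
# Savings = 1 - (CHAOS-AI tokens / traditional tokens),
# computed from the approximate figures in the table above.
tasks = {
    "Multi-file refactor": (45_000, 120_000),
    "Test generation (10 files)": (30_000, 80_000),
    "Code review (PR with 15 files)": (20_000, 55_000),
    "Documentation sync": (15_000, 40_000),
    "Bug investigation + fix": (25_000, 60_000),
}

for name, (chaos, traditional) in tasks.items():
    savings = 1 - chaos / traditional
    print(f"{name}: ~{savings:.1%} fewer tokens")
```

Because the token counts are rounded estimates, the printed percentages land within a point of the table's values.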
Parallel agent execution vs. single-agent sequential workflows.
| Task | CHAOS-AI | Single Agent | Speedup |
|---|---|---|---|
| Full project audit (lint + test + security) | ~2 min | ~8 min | 4x |
| Feature implementation (plan + code + test) | ~5 min | ~15 min | 3x |
| PR review + feedback | ~1 min | ~4 min | 4x |
| Documentation generation (10 modules) | ~3 min | ~12 min | 4x |
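The speedups come from fan-out: independent subtasks run concurrently instead of one after another, so wall-clock time tracks the slowest subtask rather than the sum. A minimal sketch of that effect (the task names and `agent_task` helper are illustrative stand-ins, not CHAOS-AI internals):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def agent_task(name: str, seconds: float) -> str:
    """Stand-in for one agent's work (e.g. an LLM call)."""
    time.sleep(seconds)
    return name

# Three independent audit subtasks, as in the "full project audit" row.
subtasks = [("lint", 0.2), ("test", 0.2), ("security", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda args: agent_task(*args), subtasks))
elapsed = time.perf_counter() - start

# Run in parallel, wall time is ~max(0.2s), not the sequential sum of 0.6s.
print(results, f"{elapsed:.2f}s")
```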
Use any provider — or run fully local with Ollama. Your choice, your keys, your machine.
| Provider | Models | Status |
|---|---|---|
| OpenAI | GPT-4o, GPT-4.1, o1, o3 | supported |
| Anthropic | Claude Opus, Sonnet, Haiku | supported |
| Google | Gemini 2.5 Pro, Flash | supported |
| Ollama (local) | Llama, Mistral, CodeLlama, etc. | supported |
| Azure OpenAI | All Azure-hosted models | supported |
| AWS Bedrock | Claude, Titan, Llama | beta |
| Custom endpoint | Any OpenAI-compatible API | supported |
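"Any OpenAI-compatible API" means the provider only needs to accept the standard `/v1/chat/completions` request shape. A minimal sketch of that payload (the base URL and model name are placeholder examples; Ollama happens to expose this API locally on port 11434):

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion request.

    Works against any endpoint in the table above that speaks the
    OpenAI chat API, including a local Ollama server.
    """
    return {
        "url": f"{base_url}/v1/chat/completions",
        "headers": {
            "Authorization": "Bearer YOUR_API_KEY",  # ignored by local Ollama
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Placeholder local setup: an Ollama server with a Llama model pulled.
req = build_chat_request("http://localhost:11434", "llama3", "Hello")
```

Swapping providers is then just a different `base_url`, `model`, and key.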