When AI lives inside apps, the cognitive load stays on you — what to do, how to prompt, which tool, what order. Autoclaw lives at the interface layer so you never carry it.
In Phase 1, Autoclaw runs a parallel low-intelligence pipeline between you and Claude Code — voice, screen, clipboard. Zero prompting. One-click approval.
Autoclaw collapses the cognitive load of prompt engineering, thinking like a PM, managing tasks, and monitoring your session. In upcoming phases, it learns your workflows and ambiently runs tasks it recognises.
A parallel low-intelligence pipeline at the interface between you and Claude Code. Voice in, prompts out. No prompt engineering. No project management. No context switching.
Watches your screen, hears your voice, and crafts the right prompt. You never have to describe your context, explain which file you're in, or hand-craft a prompt again.
Thinks like a PM. Before you finish one task, Autoclaw reads the trajectory, sees the gaps, and queues the next prompt. You stay in flow — it handles the thinking ahead.
Todos surface from your conversations, your screen, your work. Autoclaw maintains them ambiently so you can stay in flow instead of managing lists.
Character-driven Claude Code sessions. Custom filler dialogue, voice caches, storylines. Building software should be entertaining — Theatre mode makes it so.
If it's in ~/.claude/mcp.json, Autoclaw already has it. No setup. No configuration. The interface layer inherits everything.
Add any MCP server to mcp.json — Autoclaw uses it immediately.
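A minimal entry looks like this — the standard MCP server config shape, with an illustrative server (the `filesystem` name, package, and path here are examples, not required values):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
    }
  }
}
```

Drop the entry in, and the interface layer picks it up on the next session.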
Pure Swift. Apple Neural Engine. No Electron. No cloud relay. Autoclaw doesn't compete with your tools — it sits between you and them, collapsing the space where cognitive load lives.
Apple VNFeaturePrint generates neural embeddings on-device. Zero API cost. Only meaningful screen changes trigger the interface layer.
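In sketch form, change detection with the Vision framework looks like the snippet below — this is an illustrative outline using `VNGenerateImageFeaturePrintRequest`, not Autoclaw's actual pipeline; the `0.5` threshold is an assumed placeholder:

```swift
import Vision
import CoreGraphics

// Generate a neural feature print for a screen capture, entirely on-device.
func featurePrint(for capture: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: capture, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Compare two captures; only a distance above the threshold counts as a
// meaningful change worth waking the interface layer for.
// (Threshold value is illustrative.)
func screenChanged(from previous: VNFeaturePrintObservation,
                   to current: VNFeaturePrintObservation,
                   threshold: Float = 0.5) throws -> Bool {
    var distance: Float = 0
    try previous.computeDistance(&distance, to: current)
    return distance > threshold
}
```

Because the embedding runs on the Neural Engine, the comparison costs nothing per frame — the expensive calls happen only when the screen actually changes.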
Phase 1 runs through Claude Code's CLI with streaming JSON. Every MCP server, skill, and permission — inherited automatically.
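A sketch of that integration, assuming the Claude Code CLI's headless mode (`-p` with `--output-format stream-json`); the executable path, prompt, and event handling here are illustrative:

```swift
import Foundation

// Launch Claude Code headlessly and stream its JSON events line by line.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/local/bin/claude") // path is an assumption
process.arguments = ["-p", "Summarise the current diff",
                     "--output-format", "stream-json", "--verbose"]

let pipe = Pipe()
process.standardOutput = pipe

pipe.fileHandleForReading.readabilityHandler = { handle in
    guard let chunk = String(data: handle.availableData, encoding: .utf8) else { return }
    // Each line is one JSON event: session init, assistant message, tool use, result.
    for line in chunk.split(separator: "\n") {
        if let data = String(line).data(using: .utf8),
           let event = try? JSONSerialization.jsonObject(with: data) as? [String: Any] {
            print(event["type"] ?? "unknown")
        }
    }
}

try process.run()
process.waitUntilExit()
```

Because the session runs through the same CLI you use directly, every MCP server, skill, and permission configured there applies without any extra wiring.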
Model Context Protocol gives the interface layer reach. One JSON config, infinite integrations. In upcoming phases, this extends beyond Claude Code.
Screen captures, voice, clipboard, and workflows stay on your laptop; the only outbound traffic is direct API calls to Anthropic. Nothing in between.

Autoclaw sits between you and your AI tools — it sees your screen, voice, and clipboard. That position demands trust. Everything stays local, permissions are structural, and the pipeline is fully auditable.
Everything runs on your machine. No cloud relay. No telemetry. Your screen captures, voice transcripts, and file contents never leave your laptop except as direct API calls to Anthropic.
Every line of Swift is on GitHub. Read the vision pipeline. Inspect the prompt crafting logic. Verify the interface layer. Fork it, modify it, audit it.