Run Claude Code and Aider with a 122B model running entirely on your Mac. No OpenAI bill, no cloud, no data leaving your device. Install and go.
No API keys, no cloud bills, no subscriptions. Just your hardware doing what it was designed for.
Your prompts, your data, your models. Nothing ever touches a server. No accounts, no telemetry, no tracking.
Auto-benchmarks both supported runtimes and picks the faster one for your hardware. An M5 Max hits 42 tok/s on a 122B MoE model.
Bonjour auto-discovery connects your Macs. The big GPU becomes the brain, laptops become relays. Zero config.
Drop-in replacement API on localhost:4001. Works with Claude Code, Aider, Cursor, Continue, and any OpenAI client.
Routes simple prompts to the fast 35B model and complex ones to the 122B. Just set "auto" as the model name.
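What a client sends for automatic routing, sketched below with only the standard library. This assumes NOU accepts the standard OpenAI chat-completions request shape; the routing itself happens server-side, so the client only sets `"model": "auto"`:

```python
import json

def auto_routed_request(prompt: str) -> dict:
    # "auto" lets NOU choose: simple prompts go to the 35B model,
    # complex ones to the 122B. The client never picks a tier itself.
    return {
        "model": "auto",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = json.dumps(auto_routed_request("Summarize this diff."))
```

Any tool that lets you type a model name can use routing the same way: just enter `auto` instead of a specific model.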
QR-code pairing. Use your Mac's GPU from your iPhone. Voice input, chat UI, all powered by your own hardware.
Point any OpenAI-compatible tool at localhost:4001 and start coding.
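A minimal client sketch using only the Python standard library. It assumes NOU mirrors the standard OpenAI `/v1/chat/completions` path on port 4001 and needs no API key locally; adjust the URL if your setup differs:

```python
import json
import urllib.request

NOU_URL = "http://localhost:4001/v1/chat/completions"  # assumed OpenAI-compatible path

def ask(prompt: str, model: str = "auto") -> urllib.request.Request:
    """Build a chat-completion request against the local NOU server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    # Running locally, so no Authorization header is needed.
    return urllib.request.Request(
        NOU_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires NOU running on this machine):
# with urllib.request.urlopen(ask("Refactor this function.")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Existing tools work the same way: point their OpenAI base URL setting at `http://localhost:4001` and leave the API key blank or set it to any placeholder.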
NOU auto-detects your hardware tier and configures itself. Install on all your Macs — they find each other.
iPhone, iPad. Chat with AI using your Mac's power — just pair and talk.
MacBook Air. Run small tasks locally, offload heavy prompts to your bigger Mac.
MacBook Pro. Run most models yourself — code completion, analysis, chat.
Mac Studio / Pro. GPT-4-class output entirely on your desk. Serve your whole team.
NOU and Koe are part of the Enabler family. Both run 100% locally, respect your privacy, and are free and open source. Use them together for a complete local AI experience.
More from EnablerDAO →
Free, open-source, yours forever. Drop into /Applications and go.