Talk to your Claude
Ultra-fast local TTS for AI agents. Your agent speaks, you hear it instantly. No cloud. No API keys. Near-zero latency.
npx skills add EmZod/Speak-Turbo
Works with Claude Code, Amp, Opencode, Pi, Cursor
Why Speak Turbo?
Your agents work hard. But they're silent. You're juggling terminals, missing outputs, losing context. Give them a voice.
Instant
~90ms to first sound. Cloud TTS takes 500ms+. This feels like your agent is in the room.
Private
100% local. Your text never leaves your machine. No API keys. No cloud bills.
Agent-native
Built for the CLI. Your agent calls speakturbo and speaks.
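For example, an agent-side wrapper might shell out to the binary whenever a status line should be spoken. A minimal Python sketch, assuming the CLI accepts the text to speak as a single argument (this page doesn't document the exact flags, so check `speakturbo --help`):

```python
import shutil
import subprocess

def notify(text: str) -> None:
    """Speak a short status line via the speakturbo CLI if it is
    installed; otherwise fall back to printing it. Passing the text
    as a single positional argument is an assumption about the CLI."""
    if shutil.which("speakturbo"):
        subprocess.run(["speakturbo", text], check=False)
    else:
        print(text)

notify("Session 3 complete")
```

The fallback keeps the wrapper safe to drop into any agent hook, even on machines where Speak Turbo isn't installed yet.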
When you need it
Real scenarios from agent users
“Running 4 Claude sessions. Need to know which one just finished.”
→ Agent speaks: “Session 3 complete”
“Long task running. Don't want to keep checking.”
→ Agent speaks when it needs you
“Want to vibe with my model, not stare at walls of text.”
→ Conversational, multimodal flow
“Don't want to pay ElevenLabs for API calls.”
→ Free forever. Local. No limits.
How it works
Lightweight. Fast. Local.
┌─────────────────┐
│ speakturbo │
│ (Rust, 2.2MB) │
└────────┬────────┘
│ HTTP :7125
▼
┌─────────────────┐
│ daemon │
│ (Python + MLX) │
└────────┬────────┘
│
▼
┌─────────────────┐
│ Audio Output │
└─────────────────┘
Get started
One command. Apple Silicon.
For AI Agents
npx skills add EmZod/Speak-Turbo
CLI only
pip install pocket-tts uvicorn fastapi
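Per the diagram above, the CLI talks to the local daemon over HTTP on port 7125. A stdlib-only sketch of what that client call could look like; the `/tts` endpoint name and the JSON payload shape are assumptions for illustration, not the documented API:

```python
import json
from urllib import request

DAEMON_URL = "http://127.0.0.1:7125"  # port from the architecture diagram

def build_speak_request(text: str, endpoint: str = "/tts") -> request.Request:
    """Build an HTTP request like the one the speakturbo CLI might send
    to the local daemon. Endpoint and payload shape are hypothetical."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return request.Request(
        DAEMON_URL + endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_speak_request("Session 3 complete")
print(req.full_url)  # http://127.0.0.1:7125/tts
# With the daemon running, request.urlopen(req) would trigger playback.
```

Because everything stays on localhost, there is no network round-trip beyond the loopback interface, which is what keeps time-to-first-sound low.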