Turn your voice into text, right from the terminal
Record, transcribe, and paste — in a single command.
Everything runs locally. No cloud APIs, no accounts, works offline.
Quick install - Linux & macOS:
curl -fsSL https://raw.githubusercontent.com/fmueller/voxclip/main/scripts/install-voxclip.sh | sh

Why Voxclip?
Private by design
No one hears what you say except your own machine. No accounts, no cloud, no data collection — ever.
Works offline
Download a small speech model once, then forget about the internet. Works on planes, in cafés, anywhere.
One command
voxclip — that’s it. Speak, press Enter, and the transcript is on your clipboard ready to paste.
How it works
Install Voxclip
curl -fsSL https://raw.githubusercontent.com/fmueller/voxclip/main/scripts/install-voxclip.sh | sh

Downloads the correct binary for your OS and architecture. No build tools needed.
Download a speech model
voxclip setup

Downloads and verifies the default speech model. This is a one-time download — after that, everything runs offline.
Record and transcribe
voxclip

Speak into your microphone, press Enter to stop. The transcript prints in your terminal and is copied to your clipboard, ready to paste.
Built for real workflows
Think faster than you type
Speak complex instructions to Claude Code, aider, or any AI tool instead of typing paragraphs. The transcript lands on your clipboard.
Dictate anywhere
Press a hotkey, speak, and the transcript is pasted into whatever app is active — browser, email, chat. No cloud, no app switching.
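One way to wire this up is a small handler bound to a hotkey in your desktop environment. The sketch below is an assumption, not part of voxclip: the `dictate` name is hypothetical, it assumes an X11 session with xdotool installed, and it relies only on what this page states — that voxclip leaves the transcript on the clipboard.

```shell
# Hypothetical hotkey handler (X11): record speech, then paste into the
# active window. Assumes xdotool is installed; on Wayland a tool such as
# wtype would be needed instead. How recording is stopped when launched
# from a hotkey depends on your setup and voxclip's options.
dictate() {
  # voxclip copies the transcript to the clipboard; then simulate Ctrl+V
  voxclip && xdotool key --clearmodifiers ctrl+v
}
```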
Never lose a thought
Speak a thought mid-task and it’s saved as a timestamped line in a plain text file. No context switch, no friction.
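The thought-capture workflow above can be scripted in a few lines. This is a sketch under assumptions: the `note` helper and the `~/thoughts.txt` path are illustrative, and it relies on voxclip printing the finished transcript to stdout, as described under "Record and transcribe".

```shell
# Hypothetical helper: capture a spoken thought as a timestamped line.
# Assumes voxclip prints the finished transcript to stdout.
note() {
  printf '%s %s\n' "$(date -Iseconds)" "$(voxclip)" >> "$HOME/thoughts.txt"
}
```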
Features
Cross-platform
Works on Linux and macOS with automatic backend detection for PipeWire, ALSA, and ffmpeg.
Clipboard integration
Transcripts land on your clipboard automatically — paste into your editor, chat, or terminal.
Composable CLI
Subcommands for recording, transcription, and device management. Pipe and script them as you need.
Open-source models
Powered by OpenAI’s Whisper models running locally via whisper.cpp. No proprietary dependencies.
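As a sketch of the composability mentioned above: a transcript on stdout combines with ordinary Unix tools. Only the plain voxclip invocation comes from this page; the function name and the word-count use case are illustrative assumptions.

```shell
# Illustrative pipeline, assuming voxclip prints the transcript to stdout.
# Wrapped in a function so it can be reused or aliased.
spoken_wordcount() {
  # record, transcribe, and count the words you just spoke
  voxclip | wc -w
}
```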