The Privacy-First AI Chat Interface
Blazingly fast, offline-ready, and completely open source. Run with Ollama locally or connect to OpenAI, Anthropic, Groq, Mistral, and more without giving up your data.
Built on the Bleeding Edge
We chose the fastest tools available. No bloat, no legacy code. Just pure performance and standard web technologies.
Experience the Speed
Zero SSR overhead. Streaming responses. Instant local saves.
Why Faster Chat?
Control your AI conversations.
No strings attached.
Built for developers who want the power of the cloud with the privacy of localhost.
Offline-Ready
Works completely offline with local LLMs via Ollama, LM Studio, or any local inference server.
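To give a feel for what running offline looks like, here is a minimal sketch of a chat request against a local Ollama server. Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1; swap the base URL for LM Studio or any other local server, and the model name below is just an example of whatever you have pulled locally. This is an illustration of the pattern, not Faster Chat's internal provider code.

// Minimal sketch: ask a locally running model via Ollama's
// OpenAI-compatible endpoint -- no data ever leaves your machine.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2", // example: any model pulled locally
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}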
Privacy-First
All conversations are stored locally in your browser (IndexedDB). No cloud required.
Provider Hub
Admin panel to connect any AI provider. Auto-discover models with models.dev integration. Add custom OpenAI-compatible APIs.
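As an illustration, a custom OpenAI-compatible provider entry might look something like the sketch below. The field names here are hypothetical, not the app's actual config schema.

// Hypothetical shape of a custom provider entry -- illustrative only.
interface ProviderConfig {
  name: string;      // display name shown in the admin panel
  baseUrl: string;   // any OpenAI-compatible endpoint
  apiKey?: string;   // optional for local servers
  models: string[];  // model IDs, e.g. discovered via models.dev
}

const localProvider: ProviderConfig = {
  name: "Ollama (local)",
  baseUrl: "http://localhost:11434/v1",
  models: ["llama3.2", "qwen2.5-coder"],
};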
Insanely Fast
Built on Preact & Hono. 3KB runtime, zero SSR overhead, streaming responses.
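The streaming part looks roughly like this in Hono; a minimal sketch of the pattern, not the app's actual chat endpoint.

import { Hono } from "hono";
import { streamText } from "hono/streaming";

const app = new Hono();

// Sketch of a streaming route: tokens are flushed to the client as they
// arrive instead of waiting for the full completion.
app.post("/api/chat", (c) =>
  streamText(c, async (stream) => {
    // In a real handler the tokens would come from the selected provider.
    for (const token of ["Hello", ", ", "world", "!"]) {
      await stream.write(token);
    }
  })
);

export default app;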
Self-Hostable
One-command Docker deployment with optional HTTPS. No vendor lock-in.
Multi-User Auth
Role-based access control with admin, member, and readonly roles. Full user management panel.
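In Hono, a role check like this can be expressed as a small middleware. The sketch below is illustrative, not the app's actual auth code, and assumes a session middleware has already set the caller's role.

import { createMiddleware } from "hono/factory";

type Role = "admin" | "member" | "readonly";

// Illustrative role gate: reject the request unless the caller's role
// (assumed to be set earlier by a session middleware) is allowed.
const requireRole = (allowed: Role[]) =>
  createMiddleware<{ Variables: { role?: Role } }>(async (c, next) => {
    const role = c.get("role");
    if (!role || !allowed.includes(role)) {
      return c.json({ error: "Forbidden" }, 403);
    }
    await next();
  });

// Example: only admins may list users.
// app.get("/api/users", requireRole(["admin"]), listUsers);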
Beautiful UI
Catppuccin color scheme, markdown support, code highlighting, and smooth animations.
Local Persistence
Dexie.js ensures your chats are saved instantly to your device, with optional SQLite sync.
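A minimal Dexie.js setup looks like this; an illustration of the pattern, with a hypothetical database name and schema rather than Faster Chat's actual one.

import Dexie, { type Table } from "dexie";

interface Message {
  id?: number;
  chatId: string;
  role: "user" | "assistant";
  content: string;
  createdAt: number;
}

// Messages live in IndexedDB, so they survive reloads and work offline.
class ChatDB extends Dexie {
  messages!: Table<Message, number>;

  constructor() {
    super("chat-sketch"); // hypothetical database name
    this.version(1).stores({
      messages: "++id, chatId, createdAt", // auto-increment id, indexed by chat and time
    });
  }
}

const db = new ChatDB();

// Writes hit the local database immediately -- no network round trip.
await db.messages.add({
  chatId: "demo",
  role: "user",
  content: "Hello!",
  createdAt: Date.now(),
});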
Get Started in Seconds
git clone https://github.com/1337hero/faster-chat.git
cd faster-chat
bun install
bun run dev
Docker: one-command deployment
Bun: recommended runtime
Node v20+ supported