v0.2.0 Now Available

The Privacy-First AI Chat Interface

Blazingly fast, offline-ready, and completely open source. Run locally with Ollama or connect to OpenAI, Anthropic, Groq, Mistral, and more, all without giving up your data.

100% Local Storage
Zero Latency UI
Dexie + SQLite

Built on the Bleeding Edge

We chose the fastest tools available. No bloat, no legacy code. Just pure performance and standard web technologies.

Preact
3KB React alternative
Hono
Ultrafast web framework
Tailwind 4
Utility-first CSS
Vercel AI SDK
Stream handling
Bun
Fast JS runtime
Dexie
IndexedDB wrapper
TanStack
Type-safe routing
Docker
Containerization

Experience the Speed

Zero SSR overhead. Streaming responses. Instant local saves.

localhost:3000

Why Faster Chat?

Control your AI conversations.
No strings attached.

Built for developers who want the power of the cloud with the privacy of localhost.

Offline-Ready

Run completely offline with local LLMs via Ollama, LM Studio, or any local inference server.

Privacy-First

All conversations are stored locally in your browser (IndexedDB). No cloud required.

Provider Hub

Admin panel to connect any AI provider. Auto-discover models with models.dev integration. Add custom OpenAI-compatible APIs.
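As a rough sketch of what wiring up a custom OpenAI-compatible endpoint can look like with the Vercel AI SDK (the base URL, key, and model ID below are illustrative, not the app's shipped configuration):

import { createOpenAI } from '@ai-sdk/openai'

// Any OpenAI-compatible server works; here, a local Ollama instance (illustrative URL).
const ollama = createOpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama', // local servers typically ignore the key, but the SDK expects one
})

// The provider can then be handed to streamText / generateText.
const model = ollama('llama3.1')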

Insanely Fast

Built on Preact & Hono. 3KB runtime, zero SSR overhead, streaming responses.
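A minimal sketch of a streaming chat route in Hono with the Vercel AI SDK (the route path, provider, and model are assumptions, not the project's actual source):

import { Hono } from 'hono'
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

const app = new Hono()

// Receive the conversation and stream tokens back as they arrive.
app.post('/api/chat', async (c) => {
  const { messages } = await c.req.json()
  const result = streamText({ model: openai('gpt-4o-mini'), messages })
  return result.toTextStreamResponse()
})

export default app // Bun serves the default export's fetch handler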

Self-Hostable

One-command Docker deployment with optional HTTPS. No vendor lock-in.
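Assuming the repository ships a Dockerfile, a typical build-and-run flow looks something like this (image name and port are illustrative):

# Build the image from the repo's Dockerfile, then run it on port 3000 (assumed).
docker build -t faster-chat .
docker run -d -p 3000:3000 faster-chat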

Multi-User Auth

Role-based access control with admin, member, and readonly roles. Full user management panel.
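A hypothetical sketch of how such role checks can be expressed as Hono middleware (the role names come from the feature above; everything else is assumed):

import { Hono } from 'hono'
import { createMiddleware } from 'hono/factory'

type Role = 'admin' | 'member' | 'readonly'
type Env = { Variables: { role?: Role } }

const app = new Hono<Env>()

// Hypothetical guard: assumes an earlier auth step stored the user's role on the context.
const requireRole = (...allowed: Role[]) =>
  createMiddleware<Env>(async (c, next) => {
    const role = c.get('role')
    if (!role || !allowed.includes(role)) return c.json({ error: 'forbidden' }, 403)
    await next()
  })

// Example: only admins reach the user-management API; members and readonly users get a 403.
app.use('/api/admin/*', requireRole('admin'))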

Beautiful UI

Catppuccin color scheme, Markdown support, code highlighting, and smooth animations.

Local Persistence

Dexie.js ensures your chats are saved instantly to your device, with optional SQLite sync.
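Roughly the Dexie pattern involved, with assumed table and field names rather than the app's real schema:

import Dexie, { type Table } from 'dexie'

interface ChatMessage {
  id: string
  chatId: string
  role: 'user' | 'assistant'
  content: string
  createdAt: number
}

// Hypothetical database: one table for messages, indexed by chat and timestamp.
class LocalDB extends Dexie {
  messages!: Table<ChatMessage, string>
  constructor() {
    super('faster-chat')
    this.version(1).stores({ messages: 'id, chatId, createdAt' })
  }
}

export const db = new LocalDB()

// Writes land in IndexedDB immediately, so the UI never waits on the network.
await db.messages.put({ id: crypto.randomUUID(), chatId: 'c1', role: 'user', content: 'hi', createdAt: Date.now() })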

Get Started in Seconds

git clone https://github.com/1337hero/faster-chat.git
cd faster-chat
bun install
bun run dev
Docker

One-command deployment

Bun

Recommended runtime

Node.js

v20+ supported