BrowserAI πŸš€

Run Production-Ready LLMs Directly in Your Browser

Simple β€’ Fast β€’ Private β€’ Open Source

Live Demo β€’ Documentation β€’ Discord Community

BrowserAI Demo

πŸŽ‰ Featured Project: Check out Browseragent.dev - A no-code AI Agent builder powered by BrowserAI with unlimited executions! Build your own AI agents in minutes.

🌟 Live Demos

Demo             Description                                    Try It
Chat             Multi-model chat interface                     chat.browserai.dev
Voice Chat       Full-featured with speech recognition & TTS    voice-demo.browserai.dev
Text-to-Speech   Powered by Kokoro 82M                          tts-demo.browserai.dev

⚑ Key Features

  • πŸ”’ 100% Private: All processing happens locally in your browser
  • πŸš€ WebGPU Accelerated: Near-native performance
  • πŸ’° Zero Server Costs: No complex infrastructure needed
  • 🌐 Offline Capable: Works without internet after initial download
  • 🎯 Developer Friendly: Simple SDK with multiple engine support
  • πŸ“¦ Production Ready: Pre-optimized popular models

🎯 Perfect For

  • Web developers building AI-powered applications
  • Companies needing privacy-conscious AI solutions
  • Researchers experimenting with browser-based AI
  • Hobbyists exploring AI without infrastructure overhead
  • No-code platform builders creating AI-powered tools

✨ Features

  • 🎯 Run AI models directly in the browser - no server required!
  • ⚑ WebGPU acceleration for blazing fast inference
  • πŸ”„ Seamless switching between MLC and Transformers engines (see the sketch after this list)
  • πŸ“¦ Pre-configured popular models ready to use
  • πŸ› οΈ Easy-to-use API for text generation and more
  • πŸ”§ Web Worker support for non-blocking UI performance
  • πŸ“Š Structured output generation with JSON schemas
  • πŸŽ™οΈ Speech recognition and text-to-speech capabilities
  • πŸ’Ύ Built-in database support for storing conversations and embeddings
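
Both engine backends are driven through the same API; switching engines is simply a matter of which model you load. A minimal sketch, using model IDs from the supported-model lists below:

import { BrowserAI } from '@browserai/browserai';

// MLC-backed chat model
const chatAI = new BrowserAI();
await chatAI.loadModel('llama-3.2-1b-instruct');

// Transformers-backed speech recognition model (same loadModel call)
const speechAI = new BrowserAI();
await speechAI.loadModel('whisper-tiny-en');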

πŸš€ Quick Start

npm install @browserai/browserai

OR

yarn add @browserai/browserai

Basic Usage

import { BrowserAI } from '@browserai/browserai';
const browserAI = new BrowserAI();
// Load model with progress tracking
await browserAI.loadModel('llama-3.2-1b-instruct', {
  quantization: 'q4f16_1',
  onProgress: (progress) => console.log('Loading:', progress.progress + '%')
});
// Generate text
const response = await browserAI.generateText('Hello, how are you?');
console.log(response.choices[0].message.content);

πŸ“š Examples

Text Generation with Options

const response = await browserAI.generateText('Write a short poem about coding', {
  temperature: 0.8,
  max_tokens: 100,
  system_prompt: "You are a creative poet specialized in technology themes."
});

Chat with System Prompt

const ai = new BrowserAI();
await ai.loadModel('gemma-2b-it');
const response = await ai.generateText([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is WebGPU?' }
]);

Structured Output with JSON Schema

const response = await browserAI.generateText('List 3 colors', {
  json_schema: {
    type: "object",
    properties: {
      colors: {
        type: "array",
        items: {
          type: "object",
          properties: {
            name: { type: "string" },
            hex: { type: "string" }
          }
        }
      }
    }
  },
  response_format: { type: "json_object" }
});
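
Assuming the model honours the schema, the structured result comes back as text that can be parsed directly; the response shape below mirrors the Basic Usage example and is an assumption here:

// Parse the JSON string returned in the message content
const { colors } = JSON.parse(response.choices[0].message.content);
console.log(colors.map((c) => `${c.name}: ${c.hex}`).join(', '));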

Speech Recognition

const browserAI = new BrowserAI();
await browserAI.loadModel('whisper-tiny-en');
// Using the built-in recorder
await browserAI.startRecording();
const audioBlob = await browserAI.stopRecording();
const transcription = await browserAI.transcribeAudio(audioBlob, {
  return_timestamps: true,
  language: 'en'
});

Text-to-Speech

const browserAI = new BrowserAI();
await browserAI.loadModel('kokoro-tts');
const audioBuffer = await browserAI.textToSpeech('Hello, how are you today?', {
  voice: 'af_bella',
  speed: 1.0
});

// Play the audio using the Web Audio API
const audioContext = new AudioContext();
const source = audioContext.createBufferSource();
audioContext.decodeAudioData(audioBuffer, (buffer) => {
  source.buffer = buffer;
  source.connect(audioContext.destination);
  source.start(0);
});
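
These building blocks can be chained into a simple voice-chat loop, much like the voice demo. A minimal sketch using only the calls shown above; the one-instance-per-model setup and the transcription's text field are assumptions:

import { BrowserAI } from '@browserai/browserai';

// One instance per model: speech-to-text, chat, and text-to-speech
const stt = new BrowserAI();
const chat = new BrowserAI();
const tts = new BrowserAI();
await Promise.all([
  stt.loadModel('whisper-tiny-en'),
  chat.loadModel('llama-3.2-1b-instruct'),
  tts.loadModel('kokoro-tts')
]);

// Record a question, transcribe it, answer it, and speak the answer
await stt.startRecording();
// ... user speaks ...
const audioBlob = await stt.stopRecording();
const transcription = await stt.transcribeAudio(audioBlob, { language: 'en' });
const answer = await chat.generateText(transcription.text); // .text field is assumed
const speech = await tts.textToSpeech(answer.choices[0].message.content, { voice: 'af_bella' });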

πŸ”§ Supported Models

More models will be added soon. Request a model by creating an issue.

MLC Models

  • Llama-3.2-1b-Instruct
  • Llama-3.2-3b-Instruct
  • Hermes-Llama-3.2-3b
  • SmolLM2-135M-Instruct
  • SmolLM2-360M-Instruct
  • SmolLM2-1.7B-Instruct
  • Qwen-0.5B-Instruct
  • Gemma-2B-IT
  • TinyLlama-1.1B-Chat-v0.4
  • Phi-3.5-mini-instruct
  • Qwen3-0.6B
  • Qwen3-1.7B
  • Qwen3-4B
  • Qwen3-8B
  • Qwen2.5-1.5B-Instruct
  • DeepSeek-R1-Distill-Qwen-7B
  • DeepSeek-R1-Distill-Llama-8B
  • Snowflake-Arctic-Embed-M-B32
  • Snowflake-Arctic-Embed-S-B32
  • Snowflake-Arctic-Embed-M-B4
  • Snowflake-Arctic-Embed-S-B4

Transformers Models

  • Llama-3.2-1b-Instruct
  • Whisper-tiny-en (Speech Recognition)
  • Whisper-base-all (Speech Recognition)
  • Whisper-small-all (Speech Recognition)
  • Kokoro-TTS (Text-to-Speech)

πŸ—ΊοΈ Enhanced Roadmap

Phase 1: Foundation

  • 🎯 Simplified model initialization
  • πŸ“Š Basic monitoring and metrics
  • πŸ” Simple RAG implementation
  • πŸ› οΈ Developer tools integration

Phase 2: Advanced Features

  • πŸ“š Enhanced RAG capabilities
    • Hybrid search
    • Auto-chunking
    • Source tracking
  • πŸ“Š Advanced observability
    • Performance dashboards
    • Memory profiling
    • Error tracking

Phase 3: Enterprise Features

  • πŸ” Security features
  • πŸ“ˆ Advanced analytics
  • 🀝 Multi-model orchestration

🀝 Contributing

We welcome contributions! Feel free to:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • MLC AI for their incredible model compilation library and support for the WebGPU runtime and XGrammar
  • Hugging Face and Xenova for their Transformers.js library, licensed under Apache License 2.0. The original code has been modified to work in a browser environment and converted to TypeScript.
  • All our contributors and supporters!

Made with ❀️ for the AI community

πŸš€ Requirements

  • Modern browser with WebGPU support (Chrome 113+, Edge 113+, or equivalent)
  • For models with a shader-f16 requirement, the hardware must support 16-bit floating-point operations (see the detection sketch below)
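
A quick capability check before loading a model needs only the standard WebGPU API (no BrowserAI calls involved):

// Detect WebGPU and the optional shader-f16 feature
if (!navigator.gpu) {
  console.warn('WebGPU is not available in this browser');
} else {
  const adapter = await navigator.gpu.requestAdapter();
  const hasF16 = adapter?.features.has('shader-f16') ?? false;
  console.log('WebGPU available; shader-f16 supported:', hasF16);
}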
