AI Model Tokenization with Real-time Token Counting

Features · Deploy your own · Running locally


Features

  • Next.js
    • App Router with file-based routing and server components
    • Built-in API routes for token counting (see the sketch after this list)
  • Shadcn/ui
    • Beautiful, accessible UI components built with Radix UI
    • Custom components for consistent design and developer experience
  • Multiple AI Models
    • Support for various language models from SST's Models.dev database
    • Comprehensive model specifications, pricing, and capabilities data
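
The token counting runs server-side in a Next.js App Router route handler. As a rough illustration only, not the repository's actual code, here is a minimal sketch of such a route, assuming a hypothetical app/api/tokenize/route.ts path and the js-tiktoken encoder; the real project may use a different library and choose an encoding per model.

```ts
// app/api/tokenize/route.ts (hypothetical path; the project's actual route may differ)
import { getEncoding } from "js-tiktoken";

export async function POST(req: Request) {
  const { text } = (await req.json()) as { text?: string };

  if (typeof text !== "string") {
    return Response.json({ error: "Expected a `text` string" }, { status: 400 });
  }

  // cl100k_base is a stand-in here; the app would pick an encoding per selected model.
  const encoding = getEncoding("cl100k_base");
  const tokens = encoding.encode(text);

  return Response.json({ count: tokens.length, tokens });
}
```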

Deploy your own

You can deploy your own version of Tokenizer to Vercel with one click:

Deploy with Vercel

Running locally

You will need to use the environment variables defined in .env.example to run Tokenizer. It's recommended you use Vercel Environment Variables for this, but a .env file is all that is necessary.

Note: You should not commit your .env file or it will expose secrets that will allow others to control access to your various accounts.

  1. Clone the repository: git clone https://github.com/muradpm/tokenizer.git
  2. Install the Vercel CLI: bun i -g vercel
  3. Link your local instance with your Vercel and GitHub accounts (this creates a .vercel directory): vercel link
  4. Download your environment variables: vercel env pull
  5. Install dependencies: bun install
  6. Start the development server: bun dev

Your app should now be running on localhost:3000.
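
Once the dev server is up, you can exercise the token-counting route directly. The snippet below assumes the hypothetical /api/tokenize endpoint from the sketch above; adjust the path and payload to match the actual route.

```ts
// Quick check against the hypothetical /api/tokenize route sketched earlier.
const res = await fetch("http://localhost:3000/api/tokenize", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "Hello, tokenizer!" }),
});

console.log(await res.json()); // e.g. { count: ..., tokens: [...] }
```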
