Stop wrestling with prompts. Start shipping AI features.
Ax brings DSPy's approach to TypeScript – describe what you want, and let the framework handle the rest. Production-ready, type-safe, works with all major LLMs.
Building with LLMs is painful. You write prompts, test them, they break. You switch providers, everything needs rewriting. You add validation, error handling, retries – suddenly you're maintaining infrastructure instead of shipping features.
Define what goes in and what comes out. Ax handles the rest.
```typescript
import { ai, ax } from "@ax-llm/ax";

const llm = ai({ name: "openai", apiKey: process.env.OPENAI_APIKEY });

const classifier = ax(
  'review:string -> sentiment:class "positive, negative, neutral"',
);

const result = await classifier.forward(llm, {
  review: "This product is amazing!",
});

console.log(result.sentiment); // "positive"
```
No prompt engineering. No trial and error. Works with GPT-4, Claude, Gemini, or any LLM.
Write once, run anywhere. Switch between OpenAI, Anthropic, Google, or 15+ providers with one line. No rewrites (see the sketch below).
Ship faster. Stop tweaking prompts. Define inputs and outputs. The framework generates optimal prompts automatically.
Production-ready. Built-in streaming, validation, error handling, observability. Used in production handling millions of requests.
Gets smarter. Train your programs with examples. Watch accuracy improve automatically. No ML expertise needed.
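Here is the "write once, run anywhere" point as code; a minimal sketch, assuming the provider names shown here (the AI Providers guide has the full list):

```typescript
import { ai } from "@ax-llm/ax";

// Only the factory call changes; your programs run unchanged.
const openai = ai({ name: "openai", apiKey: process.env.OPENAI_APIKEY! });
const claude = ai({ name: "anthropic", apiKey: process.env.ANTHROPIC_APIKEY! });
const gemini = ai({ name: "google-gemini", apiKey: process.env.GOOGLE_APIKEY! });

// The classifier defined above works against any of them:
// await classifier.forward(claude, { review: "..." });
```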
Extract multiple typed fields from unstructured text in a single call:

```typescript
const extractor = ax(`
  customerEmail:string,
  currentDate:datetime ->
  priority:class "high, normal, low",
  sentiment:class "positive, negative, neutral",
  ticketNumber?:number,
  nextSteps:string[],
  estimatedResponseTime:string
`);

const result = await extractor.forward(llm, {
  customerEmail: "Order #12345 hasn't arrived. Need this resolved immediately!",
  currentDate: new Date(),
});
```
Need nested, fully typed outputs? Build the signature with the fluent `f()` API:

```typescript
import { f, ax } from "@ax-llm/ax";

const productExtractor = f()
  .input("productPage", f.string())
  .output(
    "product",
    f.object({
      name: f.string(),
      price: f.number(),
      specs: f.object({
        dimensions: f.object({ width: f.number(), height: f.number() }),
        materials: f.array(f.string()),
      }),
      reviews: f.array(f.object({ rating: f.number(), comment: f.string() })),
    }),
  )
  .build();

const generator = ax(productExtractor);
const result = await generator.forward(llm, { productPage: "..." });

// Full TypeScript inference
console.log(result.product.specs.dimensions.width);
console.log(result.product.reviews[0].comment);
```
Attach validation constraints directly to fields:

```typescript
const userRegistration = f()
  .input("userData", f.string())
  .output(
    "user",
    f.object({
      username: f.string().min(3).max(20),
      email: f.string().email(),
      age: f.number().min(18).max(120),
      password: f
        .string()
        .min(8)
        .regex("^(?=.*[A-Za-z])(?=.*\\d)", "Must contain letter and digit"),
      bio: f.string().max(500).optional(),
      website: f.string().url().optional(),
    }),
  )
  .build();
```
Available constraints: `.min(n)`, `.max(n)`, `.email()`, `.url()`, `.date()`, `.datetime()`, `.regex(pattern, description)`, `.optional()`
Validation runs on both input and output. Automatic retry with corrections on validation errors.
Give a program tools and it decides when to call them:

```typescript
// Your own async tool implementations (stubs shown for illustration;
// in practice you'd also supply a description and JSON-schema parameters
// so the model knows when and how to call each function).
const weatherAPI = async (args: { location: string }) =>
  `Sunny and 22°C in ${args.location}`;
const newsAPI = async (args: { query: string }) =>
  `Top headlines about ${args.query}`;

const assistant = ax("question:string -> answer:string", {
  functions: [
    { name: "getCurrentWeather", func: weatherAPI },
    { name: "searchNews", func: newsAPI },
  ],
});

const result = await assistant.forward(llm, {
  question: "What's the weather in Tokyo and any news about it?",
});
```
Mix modalities in a single signature:

```typescript
const analyzer = ax(`
  image:image,
  question:string ->
  description:string,
  mainColors:string[],
  category:class "electronics, clothing, food, other",
  estimatedPrice:string
`);
```
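Calling it looks like this; a sketch assuming images are passed as a base64 payload with a MIME type (verify the exact shape against your Ax version):

```typescript
import { readFileSync } from "node:fs";

// Load the image and encode it as base64 (assumed input shape).
const data = readFileSync("./product.jpg").toString("base64");

const result = await analyzer.forward(llm, {
  image: { mimeType: "image/jpeg", data },
  question: "What product is this and what condition is it in?",
});

console.log(result.category, result.mainColors);
```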
Install the core package:

```bash
npm install @ax-llm/ax
```
Additional packages:
```bash
# AWS Bedrock provider
npm install @ax-llm/ax-ai-aws-bedrock

# Vercel AI SDK v5 integration
npm install @ax-llm/ax-ai-sdk-provider

# Tools: MCP stdio transport, JS interpreter
npm install @ax-llm/ax-tools
```
- 15+ LLM Providers – OpenAI, Anthropic, Google, Mistral, Ollama, and more
- Type-safe – Full TypeScript support with auto-completion
- Streaming – Real-time responses with validation
- Multi-modal – Images, audio, text in the same signature
- Optimization – Automatic prompt tuning with MiPRO, ACE, GEPA (see the sketch after this list)
- Observability – OpenTelemetry tracing built-in
- Workflows – Compose complex pipelines with AxFlow
- RAG – Multi-hop retrieval with quality loops
- Agents – Tools and multi-agent collaboration
- Zero dependencies – Lightweight, fast, reliable
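For the optimization point above, the workflow is: collect labeled examples, define a metric, and let the optimizer tune your program. A hedged sketch follows; constructor options and the `compile` signature vary across Ax versions, so treat this as an outline and check the optimization guide:

```typescript
import { ax, AxMiPRO } from "@ax-llm/ax";

// Labeled examples: inputs plus the outputs you expect.
const examples = [
  { review: "Great product, fast shipping!", sentiment: "positive" },
  { review: "Broke after two days.", sentiment: "negative" },
  { review: "Does what it says on the box.", sentiment: "neutral" },
];

const classifier = ax(
  'review:string -> sentiment:class "positive, negative, neutral"',
);

// Score each prediction against its example; return a value in [0, 1].
const metric = ({ prediction, example }: { prediction: any; example: any }) =>
  prediction.sentiment === example.sentiment ? 1 : 0;

// Hypothetical wiring: option and method names may differ in your version.
const optimizer = new AxMiPRO({ studentAI: llm, examples });
const optimized = await optimizer.compile(classifier, examples, metric);
```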
Get Started
- Quick Start Guide – Set up in 5 minutes
- Examples Guide – Comprehensive examples
- DSPy Concepts – Understanding the approach
- Signatures Guide – Type-safe signature design
Deep Dives
- AI Providers – All providers, AWS Bedrock, Vercel AI SDK
- AxFlow Workflows – Build complex AI systems
- Optimization (MiPRO, ACE, GEPA) – Make programs smarter
- Advanced RAG – Production search and retrieval
Run any example from the repo:

```bash
OPENAI_APIKEY=your-key npm run tsx ./src/examples/[example-name].ts
```
Core examples: extract.ts, react.ts, agent.ts, streaming1.ts, multi-modal.ts
Production patterns: customer-support.ts, food-search.ts, ace-train-inference.ts, ax-flow-enhanced-demo.ts
- Twitter – Updates
- Discord – Help and discussion
- GitHub – Star the project
- DeepWiki – AI-powered docs
- Battle-tested in production
- Stable minor versions
- Comprehensive test coverage
- OpenTelemetry built-in (see the sketch after this list)
- TypeScript first
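On the OpenTelemetry point, a minimal sketch, assuming a tracer can be passed through the `ai()` factory's `options` (SDK and exporter setup are your own):

```typescript
import { trace } from "@opentelemetry/api";
import { ai } from "@ax-llm/ax";

// Any OpenTelemetry tracer works; SDK/exporter wiring is omitted here.
const tracer = trace.getTracer("my-app");

// Assumed hook: passing the tracer makes LLM calls emit spans.
const llm = ai({
  name: "openai",
  apiKey: process.env.OPENAI_APIKEY!,
  options: { tracer },
});
```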
- Author: @dosco
- GEPA and ACE optimizers: @monotykamary
Apache 2.0
```bash
npm install @ax-llm/ax
```