Streamline your AI workflow with a unified, elegant interface for multiple AI providers
Working with different AI models shouldn't feel like herding cats. AiFlow provides a consistent, developer-friendly interface that makes integrating AI into your Elixir applications a breeze. Start with Ollama today, with more providers coming soon.
```elixir
# Ask any question - it's that simple!
{:ok, response} = AiFlow.Ollama.query("Explain quantum computing in simple terms", "llama3.1")
```
One interface, multiple AI providers. Switch between services without rewriting your code.
Built-in error handling, debugging tools, and comprehensive testing.
- **Model Management**: List, create, copy, delete, pull, and push models
- **Smart Chat Sessions**: Persistent chat history with automatic context management
- **Text Generation**: Powerful prompt completion with customizable parameters
- **Embeddings**: Generate vector embeddings for semantic search and ML tasks
- **Blob Operations**: Efficient model file management
- **Robust Error Handling**: Comprehensive error management with bang (!) versions
- **Advanced Debugging**: Built-in tools for troubleshooting and development
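The two error-handling styles mentioned above can be sketched as follows. This assumes `query!/2` mirrors `query/2` in the usual Elixir convention, returning the response directly and raising on failure rather than returning an error tuple:

```elixir
# Tuple-returning form: pattern match on the result
case AiFlow.Ollama.query("Why is the sky blue?", "llama3.1") do
  {:ok, response} -> IO.puts(response)
  {:error, reason} -> IO.inspect(reason, label: "query failed")
end

# Bang form: returns the response directly, raises on failure
response = AiFlow.Ollama.query!("Why is the sky blue?", "llama3.1")
```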
Add ai_flow to your list of dependencies in mix.exs:
```elixir
def deps do
  [
    {:ai_flow, "~> 0.1.0"}
  ]
end
```
```elixir
# Quick start with defaults
{:ok, pid} = AiFlow.Ollama.start_link()

# Or customize your setup
{:ok, pid} = AiFlow.Ollama.start_link(
  hostname: "localhost",
  port: 11434,
  timeout: 60_000
)
```
```elixir
# Simple question
{:ok, response} = AiFlow.Ollama.query("Why is the sky blue?", "llama3.1")

# Interactive chat
{:ok, response} = AiFlow.Ollama.chat("Hello!", "chat_session_1", "user_123", "llama3.1")
{:ok, response} = AiFlow.Ollama.chat("Tell me more about that", "chat_session_1", "user_123", "llama3.1")
```
```elixir
# Generate embeddings for semantic search
{:ok, embeddings} = AiFlow.Ollama.generate_embeddings([
  "The cat sat on the mat",
  "A feline rested on the rug"
])

# Manage your models
{:ok, models} = AiFlow.Ollama.list_models()
{:ok, :success} = AiFlow.Ollama.create_model("my-custom-model", "llama3.1", "You are a helpful coding assistant.")
```
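To put those embeddings to work for semantic search, you can rank texts by cosine similarity. The helper below is a minimal sketch in plain Elixir; it assumes each embedding comes back as a flat list of floats:

```elixir
defmodule Similarity do
  # Cosine similarity between two equal-length vectors (lists of floats)
  def cosine(a, b) do
    dot = Enum.zip(a, b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()
    norm = fn v -> :math.sqrt(Enum.sum(Enum.map(v, &(&1 * &1)))) end
    dot / (norm.(a) * norm.(b))
  end
end

{:ok, [first, second]} =
  AiFlow.Ollama.generate_embeddings([
    "The cat sat on the mat",
    "A feline rested on the rug"
  ])

# Semantically similar sentences score close to 1.0
Similarity.cosine(first, second)
```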
Work with AI models intuitively:
- `AiFlow.Ollama.list_models()` - Discover available models
- `AiFlow.Ollama.query()` - Ask questions to any model
- `AiFlow.Ollama.chat()` - Engage in persistent conversations
```elixir
# Everything you need to manage AI models
AiFlow.Ollama.list_models()
AiFlow.Ollama.create_model("my-model", "base-model", "system prompt")
AiFlow.Ollama.copy_model("original", "backup")
AiFlow.Ollama.delete_model("old-model")
AiFlow.Ollama.pull_model("new-model")
AiFlow.Ollama.push_model("my-model:latest")
```
Flexible configuration for any environment:
```elixir
# Application-wide configuration (e.g. in config/config.exs)
config :ai_flow, AiFlow.Ollama,
  hostname: "localhost",
  port: 11434,
  timeout: 60_000

# Or per-instance configuration
{:ok, pid} = AiFlow.Ollama.start_link(
  hostname: "production-ai.internal",
  port: 11434,
  timeout: 120_000
)
```
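Since `AiFlow.Ollama` exposes `start_link/1`, it can presumably also run under your application's supervision tree. The sketch below assumes the module provides the standard `child_spec/1` that GenServer-based modules generate:

```elixir
# In your Application.start/2 callback
children = [
  {AiFlow.Ollama, hostname: "localhost", port: 11434, timeout: 60_000}
]

Supervisor.start_link(children, strategy: :one_for_one)
```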
AiFlow is just getting started! Upcoming integrations include:
- **Bumblebee Integration**: Hugging Face models support
- **Cloud AI Providers**: OpenAI, Anthropic, Google AI
- **Model Registry**: Centralized model management
- **Performance Optimizations**: Caching and batching
We love contributions! Here's how to get started:
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Full API documentation is available at HexDocs.
Distributed under the MIT License. See LICENSE for more information.
- Found a bug? Open an issue
- Have a feature request? We'd love to hear it!
- Questions? Check out the documentation or open a discussion
Made with ❤️ for the Elixir community