One Line of Code, Access to 100+ AI Models
Model API Hub is a unified Python SDK that lets you access multiple AI model APIs across different platforms and modalities with a consistent, simple interface.
Stop juggling different SDKs for each provider. Use one library for everything.
```python
# Same interface, different providers
from model_api_hub import deepseek_chat, siliconflow_chat, kimi_chat

# All work the same way
response = deepseek_chat("Hello!")
response = siliconflow_chat("Hello!")
response = kimi_chat("Hello!")
```
| Feature | Description |
|---|---|
| 18+ LLM Providers | OpenAI, Anthropic, DeepSeek, ZhipuAI, Kimi, SiliconFlow, and more |
| Streaming Support | Real-time streaming responses for all major providers |
| 5 Modalities | LLM, Vision-Language, Image Gen, Audio TTS, Video Gen |
| One-Line Setup | `pip install model-api-hub` and you're ready |
| Unified API | Same interface across all providers |
| Flexible Config | `.env`, YAML, or direct API keys |
| CLI Included | Test models directly from command line |
| Type Hints | Full type safety support |
```bash
pip install model-api-hub
```
```bash
# Create .env file
echo 'DEEPSEEK_API_KEY=your_key_here' > .env
```
```python
from model_api_hub import deepseek_chat

# That's it. You're done.
response = deepseek_chat("Explain quantum computing in simple terms")
print(response)
```
```python
from model_api_hub import deepseek_chat, kimi_chat, siliconflow_chat, stepfun_chat

# DeepSeek
response = deepseek_chat(
    "Write a Python function to sort a list",
    system_prompt="You are a coding expert."
)

# Kimi (Moonshot)
response = kimi_chat(
    "Summarize this article",
    temperature=0.5
)

# SiliconFlow - access 50+ models
response = siliconflow_chat("Hello!", model="deepseek-ai/DeepSeek-V3")

# StepFun - OpenAI compatible endpoint
# Prompt: "Hello, please introduce StepFun's AI!"
response = stepfun_chat(
    "你好,请介绍一下阶跃星辰的人工智能!",
    # System prompt (translated): "You are an AI chat assistant provided by
    # StepFun, fluent in Chinese, English, and many other languages. While
    # keeping user data secure, you answer questions quickly and accurately,
    # and you refuse content involving pornography, gambling, drugs,
    # violence, or terrorism."
    system_prompt=(
        "你是由阶跃星辰提供的AI聊天助手,你擅长中文、英文以及多种其他语言的对话。"
        "在保证用户数据安全的前提下,你能对用户的问题和请求作出快速和精准的回答。"
        "同时,你的回答和建议应该拒绝黄赌毒、暴力恐怖主义的内容。"
    ),
)
```
```python
from model_api_hub import deepseek_chat_stream

# Stream responses in real-time
for chunk in deepseek_chat_stream("Tell me a long story"):
    print(chunk, end="", flush=True)
```
```python
from model_api_hub.api.llm.deepseek_llm import create_client, get_completion

client = create_client()
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a programming language..."},
    {"role": "user", "content": "What are its main features?"}
]
response = get_completion(client, messages)
```
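When you manage history yourself, each turn appends the user message and the model's reply back onto `messages` so the next request carries full context. A minimal, stdlib-only sketch of that accumulation pattern (`ask` is a stand-in for `get_completion`, not part of the library):

```python
def ask(messages):
    # Stand-in for get_completion(client, messages); a real call would
    # send the full history to the provider and return the reply text.
    return f"(reply to: {messages[-1]['content']})"

messages = [{"role": "system", "content": "You are a helpful assistant."}]

for question in ["What is Python?", "What are its main features?"]:
    messages.append({"role": "user", "content": question})
    reply = ask(messages)
    # Append the assistant turn so the next question keeps full context.
    messages.append({"role": "assistant", "content": reply})

# system + 2 x (user + assistant) = 5 entries
print(len(messages))  # 5
```

Forgetting to append the assistant turn is the most common multi-turn bug: the model then sees a history of unanswered questions.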
```python
from model_api_hub.api.vlm.openai_vlm import chat

response = chat(
    prompt="What's in this image?",
    image_path="photo.jpg"
)
```
```python
from model_api_hub.api.image.siliconflow_image_gen import generate

image_url = generate("A beautiful sunset over mountains")
```
```python
from model_api_hub.api.audio.openai_tts import synthesize

audio = synthesize("Hello, world!", voice="alloy", output_path="hello.mp3")
```
• DeepSeek-R1
• DeepSeek-Coder-V2
• DeepSeek-V3
• GPT-4o
• GPT-4o-mini
• GPT-4-Turbo
• Claude-Sonnet-4.5
• Claude-Opus-4
• Gemini-Pro
• Gemini-Flash
• GLM-4.7-Flash
• GLM-4
• GLM-4-Plus
• GLM-4.5-Air
• GLM-4.1-Thinking
• Moonshot-v1-128k
• Moonshot-v1-32k
• Kimi-K2
• MiniMax-ABAB6.5s
• MiniMax-M2
• ERNIE-4.0
• ERNIE-4.5
• Qwen-Max
• Qwen-Plus
• Qwen-Turbo
• Qwen2.5
• Qwen2.5-Coder
• Qwen3
• Qwen2
• Qwen1.5
• Llama4
• Llama3.1
• Llama3-70B
• Gemma3
• Gemma-2
• Mistral-Large
• Mixtral-8x22B
• Command-R-Plus
• InternLM3
• InternLM
• InternLM2-20B
• Baichuan
• Yi (01.AI)
• Yuan2.0
• Yuan2.0-M32
• Hunyuan-A13B
• Hunyuan3D-2
• Spark-v3.5
• Phi4
• Phi-3
• MiniCPM
• CharacterGLM
• GPT-4V
• Gemini-Pro-Vision
• Qwen3-VL
• Qwen2-VL
• Qwen-VL-Plus
• GLM-4V
• MiniCPM-o
• Yi-VL
• InternVL
• DeepSeek-VL
• SpatialLM
• LLaVA
• CogVLM
• BlueLM-Vision
• DALL-E 2
• Kolors
• Stable Diffusion XL
• Stable Diffusion 3
• Recraft-v3
• Wanx
• ERNIE-ViLG
• Jimeng (Dreamina)
• CogView
• Hunyuan-Image
• Playground-v2
• Kandinsky
• DeepFloyd IF
• Whisper-Large-v3
• TTS-1
• TTS-1-HD
• ElevenLabs-Multilingual-v2
• ElevenLabs-Flash
• Azure-TTS
• Azure-Speech
• MiniMax-TTS
• Baidu-TTS
• Qwen-Audio
• ChatTTS
• Fish-Speech
• GPT-SoVITS
Create a .env file in your project root:
```bash
# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
DEEPSEEK_API_KEY=sk-...
KIMI_API_KEY=sk-...
ZHIPUAI_API_KEY=...
SILICONFLOW_API_KEY=sk-...
MINIMAX_API_KEY=...
YIYAN_API_KEY=...
DASHSCOPE_API_KEY=sk-...
MODELSCOPE_API_KEY=ms-...
XUNFEI_SPARK_API_KEY=...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...
MISTRAL_API_KEY=...
COHERE_API_KEY=...
PERPLEXITY_API_KEY=pplx-...
AZURE_OPENAI_API_KEY=...
STEP_API_KEY=...

# Other Services
ELEVENLABS_API_KEY=...
AZURE_SPEECH_KEY=...
STABILITY_API_KEY=...
RECRAFT_API_KEY=...
RUNWAY_API_KEY=...
LUMA_API_KEY=...
```
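Under the hood, a `.env` file is just `KEY=VALUE` lines that end up in the process environment, where each provider reads its key. A minimal stdlib sketch of that mapping (a real project would typically let the SDK or `python-dotenv` handle this; `load_env` here is illustrative, not a library function):

```python
import os

def load_env(path=".env"):
    # Parse KEY=VALUE lines, skipping comments and blanks,
    # without overwriting variables already set in the environment.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
    # Providers then pick up their keys, e.g. os.getenv("DEEPSEEK_API_KEY")
```

Note `setdefault`: variables already exported in the shell win over the file, which is the usual precedence for `.env` loaders.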
Create config.yaml:
```yaml
llm:
  openai:
    model: "gpt-4o"
    temperature: 0.7
    max_tokens: 4096
  deepseek:
    model: "deepseek-chat"
    temperature: 0.7
    max_tokens: 4096

vlm:
  openai:
    model: "gpt-4o"

image:
  siliconflow:
    model: "Kwai-Kolors/Kolors"
    size: "1024x1024"
```
- LLM Usage Guide - Complete LLM documentation
- API Reference - Full API reference
- llm.txt - Quick reference for AI assistants
Run tests for all providers:
```bash
# Test all LLMs (sync)
python tests/test_llm.py

# Test streaming
python tests/test_llm_streaming.py

# Test other modalities
python tests/test_vlm.py
python tests/test_image.py
python tests/test_audio.py
python tests/test_video.py
```
```bash
# Chat with a provider
model-api-hub chat deepseek "Hello!"

# List available providers
model-api-hub list

# Test a provider
model-api-hub test deepseek
```
```
model_api_hub/
├── api/
│   ├── llm/        # Language Models (18+ providers)
│   ├── vlm/        # Vision-Language Models
│   ├── image/      # Image Generation
│   ├── audio/      # Text-to-Speech
│   └── video/      # Video Generation
├── utils/
│   └── config.py   # Configuration management
├── cli.py          # Command-line interface
└── __init__.py     # Public API exports
```
We welcome contributions! See CONTRIBUTING.md for guidelines.
1. Create a new file in `model_api_hub/api/llm/{provider}_llm.py`
2. Implement `chat()`, `chat_stream()` (optional), and `create_client()`
3. Add exports to `model_api_hub/api/llm/__init__.py`
4. Add tests in `tests/test_llm.py`
5. Update documentation
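The steps above can be sketched as a provider module skeleton. Everything here is illustrative: the real modules in `model_api_hub/api/llm/` define the actual signatures, and `FakeClient` / `EXAMPLE_API_KEY` stand in for whatever client object and key name the provider's SDK uses.

```python
import os

class FakeClient:
    # Placeholder for the provider's real SDK client.
    def complete(self, messages):
        return f"echo: {messages[-1]['content']}"

def create_client():
    # A real implementation would read the provider's key from the
    # environment and construct its SDK client with it.
    api_key = os.getenv("EXAMPLE_API_KEY", "")
    return FakeClient()

def chat(prompt, system_prompt=None, **kwargs):
    # Build the OpenAI-style messages list and return the reply text.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    client = create_client()
    return client.complete(messages)

def chat_stream(prompt, **kwargs):
    # Optional: yield the reply in chunks instead of one string.
    # A real implementation would relay the provider's streaming events.
    reply = chat(prompt, **kwargs)
    for i in range(0, len(reply), 8):
        yield reply[i : i + 8]
```

With this shape in place, the remaining steps are mechanical: export `chat`/`chat_stream` from `model_api_hub/api/llm/__init__.py` and add a case to `tests/test_llm.py`.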
See llm.txt for detailed implementation guide.
Apache License 2.0 - see LICENSE file.
Thanks to all the AI providers for their amazing APIs!