Model API Hub Logo

Model API Hub

One Line of Code, Access to 100+ AI Models


English · 简体中文


What is Model API Hub?

Model API Hub is a unified Python SDK that lets you access multiple AI model APIs across different platforms and modalities with a consistent, simple interface.

Stop juggling different SDKs for each provider. Use one library for everything.

# Same interface, different providers
from model_api_hub import deepseek_chat, siliconflow_chat, kimi_chat
# All work the same way
response = deepseek_chat("Hello!")
response = siliconflow_chat("Hello!")
response = kimi_chat("Hello!")

Key Features

18+ LLM Providers: OpenAI, Anthropic, DeepSeek, ZhipuAI, Kimi, SiliconFlow, and more
Streaming Support: real-time streaming responses for all major providers
5 Modalities: LLM, Vision-Language, Image Generation, Audio TTS, Video Generation
One-Line Setup: pip install model-api-hub and you're ready
Unified API: same interface across all providers
Flexible Config: .env, YAML, or direct API keys
CLI Included: test models directly from the command line
Type Hints: full type safety support

Quick Start

Installation

pip install model-api-hub

1. Set Your API Key

# Create .env file
echo 'DEEPSEEK_API_KEY=your_key_here' > .env

2. Start Coding

from model_api_hub import deepseek_chat
# That's it. You're done.
response = deepseek_chat("Explain quantum computing in simple terms")
print(response)

Usage Examples

Language Models (LLM)

Synchronous Chat

from model_api_hub import deepseek_chat, kimi_chat, siliconflow_chat, stepfun_chat
# DeepSeek
response = deepseek_chat(
    "Write a Python function to sort a list",
    system_prompt="You are a coding expert."
)
# Kimi (Moonshot)
response = kimi_chat(
    "Summarize this article",
    temperature=0.5
)
# SiliconFlow - access 50+ models
response = siliconflow_chat("Hello!", model="deepseek-ai/DeepSeek-V3")
# StepFun - OpenAI compatible endpoint
response = stepfun_chat(
    "Hello, please introduce StepFun's AI!",
    system_prompt=(
        "You are an AI chat assistant provided by StepFun, skilled at conversation "
        "in Chinese, English, and many other languages. While keeping user data "
        "secure, you answer questions and requests quickly and accurately, and your "
        "answers and suggestions must refuse content involving pornography, gambling, "
        "drugs, violence, or terrorism."
    ),
)

Streaming Chat

from model_api_hub import deepseek_chat_stream
# Stream responses in real-time
for chunk in deepseek_chat_stream("Tell me a long story"):
    print(chunk, end="", flush=True)

Multi-turn Conversation

from model_api_hub.api.llm.deepseek_llm import create_client, get_completion
client = create_client()
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a programming language..."},
    {"role": "user", "content": "What are its main features?"}
]
response = get_completion(client, messages)
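
To continue the conversation, append the assistant's reply and the next user turn to messages, then call get_completion again. A minimal sketch, assuming get_completion returns the reply text as a string:

# Keep the history growing so the model sees the full context on each turn.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Show a short code example of one of them."})
follow_up = get_completion(client, messages)
print(follow_up)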

Vision-Language Models (VLM)

from model_api_hub.api.vlm.openai_vlm import chat
response = chat(
    prompt="What's in this image?",
    image_path="photo.jpg"
)

Image Generation

from model_api_hub.api.image.siliconflow_image_gen import generate
image_url = generate("A beautiful sunset over mountains")
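
The variable name suggests generate returns a URL rather than raw bytes. If so, you can save the result with any HTTP client; a sketch using requests (not part of model-api-hub):

import requests

# Download the generated image to disk (assumes image_url is a plain HTTPS URL).
resp = requests.get(image_url, timeout=60)
resp.raise_for_status()
with open("sunset.png", "wb") as f:
    f.write(resp.content)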

Text-to-Speech

from model_api_hub.api.audio.openai_tts import synthesize
audio = synthesize("Hello, world!", voice="alloy", output_path="hello.mp3")

Supported Models

Language Models (LLM)

Vision-Language Models (VLM)

Image Generation Models

Audio Models

Video Generation Models


Configuration

Environment Variables (.env)

Create a .env file in your project root:

# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
DEEPSEEK_API_KEY=sk-...
KIMI_API_KEY=sk-...
ZHIPUAI_API_KEY=...
SILICONFLOW_API_KEY=sk-...
MINIMAX_API_KEY=...
YIYAN_API_KEY=...
DASHSCOPE_API_KEY=sk-...
MODELSCOPE_API_KEY=ms-...
XUNFEI_SPARK_API_KEY=...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...
MISTRAL_API_KEY=...
COHERE_API_KEY=...
PERPLEXITY_API_KEY=pplx-...
AZURE_OPENAI_API_KEY=...
STEP_API_KEY=...
# Other Services
ELEVENLABS_API_KEY=...
AZURE_SPEECH_KEY=...
STABILITY_API_KEY=...
RECRAFT_API_KEY=...
RUNWAY_API_KEY=...
LUMA_API_KEY=...
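
model-api-hub picks these keys up from your .env automatically, as in the Quick Start. If you want to control when the file is loaded (for example in a notebook or a script run from a different working directory), you can load it explicitly with python-dotenv first; a sketch, not a requirement:

from dotenv import load_dotenv  # pip install python-dotenv
from model_api_hub import deepseek_chat

# Push the keys from .env into the process environment before the first call.
load_dotenv()

print(deepseek_chat("Ping"))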

YAML Configuration

Create config.yaml:

llm:
  openai:
    model: "gpt-4o"
    temperature: 0.7
    max_tokens: 4096

  deepseek:
    model: "deepseek-chat"
    temperature: 0.7
    max_tokens: 4096

vlm:
  openai:
    model: "gpt-4o"

image:
  siliconflow:
    model: "Kwai-Kolors/Kolors"
    size: "1024x1024"
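
How model-api-hub consumes config.yaml is handled by its config module. If you want to drive a call from this file by hand, one option is to read it with PyYAML and forward the values as keyword arguments. A sketch, assuming deepseek_chat accepts the same model / temperature / max_tokens keywords shown for the other helpers:

import yaml  # pip install pyyaml
from model_api_hub import deepseek_chat

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# Forward the DeepSeek settings defined above as keyword arguments.
deepseek_cfg = cfg["llm"]["deepseek"]
response = deepseek_chat(
    "Summarize the benefits of a unified SDK.",
    model=deepseek_cfg["model"],
    temperature=deepseek_cfg["temperature"],
    max_tokens=deepseek_cfg["max_tokens"],  # assumption: check that this kwarg is supported
)
print(response)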

Documentation


Testing

Run tests for all providers:

# Test all LLMs (sync)
python tests/test_llm.py
# Test streaming
python tests/test_llm_streaming.py
# Test other modalities
python tests/test_vlm.py
python tests/test_image.py
python tests/test_audio.py
python tests/test_video.py

CLI Usage

# Chat with a provider
model-api-hub chat deepseek "Hello!"
# List available providers
model-api-hub list
# Test a provider
model-api-hub test deepseek

Architecture

model_api_hub/
├── api/
│   ├── llm/            # Language Models (18+ providers)
│   ├── vlm/            # Vision-Language Models
│   ├── image/          # Image Generation
│   ├── audio/          # Text-to-Speech
│   └── video/          # Video Generation
├── utils/
│   └── config.py       # Configuration management
├── cli.py              # Command-line interface
└── __init__.py         # Public API exports

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Adding a New Provider

  1. Create a new file in model_api_hub/api/llm/{provider}_llm.py
  2. Implement chat(), chat_stream() (optional), and create_client()
  3. Add exports to model_api_hub/api/llm/__init__.py
  4. Add tests in tests/test_llm.py
  5. Update documentation

See llm.txt for a detailed implementation guide.
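
As a rough sketch of the shape such a module can take for an OpenAI-compatible provider (the endpoint URL, environment variable, and default model below are placeholders; follow llm.txt for the project's real conventions):

# model_api_hub/api/llm/example_llm.py (illustrative skeleton, not a real provider)
import os
from openai import OpenAI  # assumes an OpenAI-compatible endpoint


def create_client() -> OpenAI:
    """Build a client from the provider's API key and base URL."""
    return OpenAI(
        api_key=os.environ["EXAMPLE_API_KEY"],   # placeholder env var
        base_url="https://api.example.com/v1",   # placeholder endpoint
    )


def chat(prompt: str, system_prompt: str = "You are a helpful assistant.",
         model: str = "example-chat", **kwargs) -> str:
    """Single-turn chat returning the reply text."""
    client = create_client()
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        **kwargs,
    )
    return resp.choices[0].message.content


def chat_stream(prompt: str, model: str = "example-chat", **kwargs):
    """Optional: yield reply chunks as they arrive."""
    client = create_client()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        **kwargs,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta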


License

Apache License 2.0 - see LICENSE file.


Support


Acknowledgments

Thanks to all the AI providers for their amazing APIs!
