
CrewAIMaster transforms any task description into a fully functional, production-ready multi-agent system, making advanced AI orchestration accessible to everyone.


CrewAIMaster

A Python package for building intelligent multi-agent systems using CrewAI

CrewAIMaster is an advanced framework that automatically generates, manages, and executes multi-agent crews from natural language task descriptions. It provides a CLI and a comprehensive backend for creating intelligent AI agents with memory, tools, and safety guardrails.

πŸ“¦ Installation

# Install from PyPI (when available)
pip install crewaimaster
# Or install from source (recommended for development)
git clone https://github.com/VishApp/crewaimaster
cd crewaimaster
python -m venv venv
source venv/bin/activate
pip install -e .

🎬 Demo

CrewAIMaster Demo: a complete video walkthrough of CrewAIMaster in action.

πŸ“Έ Screenshots

  • CLI Interface
  • CrewAIMaster Providers
  • Crew Creation
  • CrewAIMaster Help
  • Execution Dashboard

πŸƒ Quick Start

Prerequisites

# Verify Python 3.10+ is installed
python --version
# Configure your LLM provider (see supported providers)
crewaimaster providers
# Quick setup with OpenAI (most common)
crewaimaster providers --configure openai --api-key "your-openai-key" --model "gpt-4"

1. Create Your First Crew with AI Orchestration

# Create an intelligent crew using AI analysis
crewaimaster create "Write a comprehensive market analysis report for electric vehicles in 2024" --name electric_vehicles_market_analysis_crew

2. Execute the Crew

# Run the crew (requires configured LLM provider)
crewaimaster run electric_vehicles_market_analysis_crew
# With additional context:
crewaimaster run electric_vehicles_market_analysis_crew --input "Focus on Tesla, BMW, and Volkswagen specifically"

3. Alternative Execution (Direct Script)

Generated crews can also be executed directly using environment variables:

# Navigate to the generated crew directory
cd crews/electric_vehicles_market_analysis_crew
# Option 1: run with the standard provider environment variable
export OPENAI_API_KEY="your-openai-key"
./run.sh "your input"
# Option 2: run with CrewAIMaster-specific environment variables
export CREWAIMASTER_LLM_PROVIDER="openai"
export CREWAIMASTER_LLM_MODEL="gpt-4"
export CREWAIMASTER_LLM_API_KEY="your-openai-key"
export CREWAIMASTER_LLM_BASE_URL="https://api.openai.com/v1"
./run.sh "your input"
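
How the generated runner consumes these variables is internal to the generated project. As a rough, hypothetical sketch (not the actual run.sh or main.py that CrewAIMaster emits), a launcher could map the CREWAIMASTER_LLM_* variables onto the standard OpenAI-style variables before starting the crew:

# Hypothetical launcher sketch: map CREWAIMASTER_* variables to the standard
# OpenAI-style variables; the real generated code may differ.
import os

if os.getenv("CREWAIMASTER_LLM_PROVIDER", "openai") == "openai":
    # Fall back to the CrewAIMaster-specific values when the standard ones are unset
    os.environ.setdefault("OPENAI_API_KEY", os.getenv("CREWAIMASTER_LLM_API_KEY", ""))
    os.environ.setdefault("OPENAI_API_BASE", os.getenv("CREWAIMASTER_LLM_BASE_URL", "https://api.openai.com/v1"))
    os.environ.setdefault("OPENAI_MODEL_NAME", os.getenv("CREWAIMASTER_LLM_MODEL", "gpt-4"))

# ...then import the crew defined in crew.py and call crew.kickoff()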

πŸ”„ Development Workflow

Typical CrewAIMaster Workflow

flowchart LR
 A["`**1. Task Definition**
 Natural Language Task`"] --> B["`**2. AI Analysis**
 πŸ€– Task Complexity
 🎯 Agent Requirements
 πŸ› οΈ Tool Selection`"]
 
 B --> C["`**3. Crew Creation**
 πŸ‘₯ Agent Design
 πŸ”§ Tool Assignment
 πŸ“‹ Task Orchestration`"]
 
 C --> D["`**4. Execution**
 πŸƒ Multi-Agent Coordination
 πŸ”„ Real-time Processing
 πŸ“Š Progress Monitoring`"]
 
 D --> E["`**5. Results & Analytics**
 πŸ“„ Output Generation
 πŸ“ˆ Performance Metrics
 πŸ’Ύ Persistent Storage`"]
 
 E --> F["`**6. Optimization**
 πŸ”§ Crew Modification
 ⚑ Performance Tuning
 πŸ“€ Export/Backup`"]
 
 F --> G["`**7. Reuse & Scale**
 πŸ”„ Crew Reusability
 πŸ“š Knowledge Building
 πŸš€ Production Deployment`"]
 classDef stepStyle fill:#f9f9f9,stroke:#333,stroke-width:2px,color:#333
 class A,B,C,D,E,F,G stepStyle

πŸ—οΈ Architecture

CrewAIMaster follows a clean, layered architecture designed for intelligent multi-agent system creation and execution:

flowchart TD
 %% User Entry Point
 User[πŸ‘€ User Input<br/>Natural Language Task] --> CLI[πŸ–₯️ CLI Interface<br/>crewaimaster create/run/providers]
 
 %% Configuration Layer
 CLI --> Config[βš™οΈ Configuration<br/>config.yaml<br/>LLM Providers]
 
 %% AI Orchestration Core
 CLI --> MasterAgent[🧠 Master Agent<br/>Intelligent Orchestrator]
 
 %% AI Analysis Pipeline
 MasterAgent --> TaskAnalyzer[πŸ“‹ Task Analyzer<br/>β€’ Complexity Assessment<br/>β€’ Requirements Extraction<br/>β€’ Agent Planning]
 
 TaskAnalyzer --> AgentDesigner[πŸ‘₯ Agent Designer<br/>β€’ Role Definition<br/>β€’ Tool Selection<br/>β€’ Capability Mapping]
 
 AgentDesigner --> CrewOrchestrator[🎭 Crew Orchestrator<br/>β€’ Team Assembly<br/>β€’ Process Selection<br/>β€’ Workflow Design]
 
 %% Core Generation Engine
 CrewOrchestrator --> CrewDesigner[πŸ”§ Crew Designer<br/>File-Based Generator]
 Config --> CrewDesigner
 
 CrewDesigner --> FileGen[πŸ“ File Generator<br/>β€’ Project Structure<br/>β€’ Python Modules<br/>β€’ YAML Configs]
 
 %% Output Generation
 FileGen --> GeneratedFiles{πŸ“„ Generated Crew Project}
 
 %% Generated Project Structure
 GeneratedFiles --> AgentYAML[agents.yaml<br/>Agent Definitions]
 GeneratedFiles --> TaskYAML[tasks.yaml<br/>Task Specifications]
 GeneratedFiles --> CrewPY[crew.py<br/>CrewAI Implementation]
 GeneratedFiles --> MainPY[main.py<br/>Execution Entry Point]
 
 %% Execution Runtime
 MainPY --> CrewAI[πŸš€ CrewAI Runtime<br/>Multi-Agent Execution]
 
 CrewAI --> AgentA[πŸ€– Agent A<br/>Specialized Role]
 CrewAI --> AgentB[πŸ€– Agent B<br/>Specialized Role]
 CrewAI --> AgentC[πŸ€– Agent C<br/>Specialized Role]
 
 %% Tool Integration
 AgentA --> Tools[πŸ› οΈ Tool Registry<br/>β€’ Web Search<br/>β€’ File Operations<br/>β€’ Code Execution<br/>β€’ Custom Tools]
 AgentB --> Tools
 AgentC --> Tools
 
 %% LLM Integration
 Config --> LLMProvider[πŸ”— LLM Provider<br/>β€’ OpenAI<br/>β€’ Anthropic<br/>β€’ Google<br/>β€’ Custom APIs]
 LLMProvider --> AgentA
 LLMProvider --> AgentB
 LLMProvider --> AgentC
 LLMProvider --> MasterAgent
 
 %% Memory & Knowledge
 CrewAI --> Memory[🧠 Memory System<br/>β€’ Agent Memory<br/>β€’ Shared Context<br/>β€’ Knowledge Base]
 
 %% Safety & Guardrails
 Tools --> Guardrails[πŸ›‘οΈ Guardrails<br/>β€’ Safety Checks<br/>β€’ Content Filtering<br/>β€’ Validation]
 
 %% Final Output
 CrewAI --> Results[πŸ“Š Results<br/>Task Completion<br/>Generated Content]
 
 %% Styling
 classDef userLayer fill:#e8f5e8,stroke:#1b5e20,stroke-width:3px,color:#000
 classDef cliLayer fill:#e1f5fe,stroke:#01579b,stroke-width:2px,color:#000
 classDef aiLayer fill:#f3e5f5,stroke:#4a148c,stroke-width:2px,color:#000
 classDef coreLayer fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
 classDef fileLayer fill:#fce4ec,stroke:#880e4f,stroke-width:2px,color:#000
 classDef runtimeLayer fill:#fff3e0,stroke:#e65100,stroke-width:2px,color:#000
 classDef toolLayer fill:#f1f8e9,stroke:#33691e,stroke-width:2px,color:#000
 
 class User userLayer
 class CLI,Config cliLayer
 class MasterAgent,TaskAnalyzer,AgentDesigner,CrewOrchestrator aiLayer
 class CrewDesigner,FileGen,LLMProvider coreLayer
 class GeneratedFiles,AgentYAML,TaskYAML,CrewPY,MainPY fileLayer
 class CrewAI,AgentA,AgentB,AgentC,Memory,Results runtimeLayer
 class Tools,Guardrails toolLayer

πŸ”„ Data Flow Explanation

  1. User Input: Natural language task description via CLI
  2. AI Analysis: Master Agent analyzes complexity and requirements
  3. Intelligent Design: AI agents design optimal crew composition
  4. Code Generation: Automated creation of CrewAI project files (a rough sketch follows this list)
  5. Execution: Generated crew runs with real-time coordination
  6. Results: Task completion with generated content and insights
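
To make step 4 concrete, here is a minimal sketch of what a generated crew module could look like, assuming the standard CrewAI Agent/Task/Crew pattern; the roles, goals, and task wording are illustrative, not the exact output of CrewAIMaster:

# crew.py (illustrative sketch of a generated crew, not the exact emitted file)
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="EV Market Researcher",
    goal="Collect current data on the electric vehicle market",
    backstory="An analyst specialized in automotive industry trends.",
)
writer = Agent(
    role="Report Writer",
    goal="Turn research findings into a comprehensive market analysis report",
    backstory="A technical writer focused on clear, structured reports.",
)

research_task = Task(
    description="Research the 2024 electric vehicle market.",
    expected_output="Key findings with sources.",
    agent=researcher,
)
writing_task = Task(
    description="Write the market analysis report from the research findings.",
    expected_output="A structured market analysis report.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,  # agents run in order, passing results along
)

if __name__ == "__main__":
    print(crew.kickoff())

The generated project also includes agents.yaml and tasks.yaml for the agent and task definitions, plus main.py as the execution entry point (see the architecture diagram below).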

πŸ›οΈ Architecture Overview

CrewAIMaster's architecture is designed for scalability, modularity, and intelligent automation:

🎯 User Interface Layer

  • CLI Interface: Rich terminal experience with typer and rich libraries
  • Command Processing: Handles user commands and provides interactive feedback
  • Input Validation: Ensures commands are properly formatted and validated

πŸ€– AI Orchestration Layer (Core Innovation)

  • MasterAgentCrew: Main orchestrator using AI agents for intelligent decision-making
  • TaskAnalyzerAgent: Advanced NLP analysis of user tasks and requirements
  • AgentDesignerAgent: Intelligent design of agents based on task requirements
  • CrewOrchestratorAgent: Optimizes crew composition and execution strategies

βš™οΈ Core Processing Layer

  • CrewDesigner: Handles CrewAI integration and agent instantiation
  • TaskAnalyzer: Legacy fallback for task analysis with pattern matching

πŸ› οΈ Tool Ecosystem

  • Tool Registry: Centralized management of all available tools
  • Available Tools: Comprehensive library of built-in and custom tools (see the sketch after this list)
  • Guardrails: Safety and validation systems for secure operation
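
The tool registry itself is internal to CrewAIMaster, but at the generated-code level tools end up attached to CrewAI agents. A minimal illustration, assuming the optional crewai_tools package is installed (the specific tool and agent below are examples, not CrewAIMaster output):

# Illustrative only: a CrewAI agent receiving a web search tool
from crewai import Agent
from crewai_tools import SerperDevTool  # web search; requires SERPER_API_KEY in the environment

researcher = Agent(
    role="EV Market Researcher",
    goal="Collect current data on the electric vehicle market",
    backstory="An analyst specialized in automotive industry trends.",
    tools=[SerperDevTool()],
)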

πŸ”„ Execution Engine

  • CrewAI Engine: Core execution engine for running multi-agent crews
  • Agent Memory: Sophisticated memory management for agent learning and context

πŸ”„ Data Flow

  1. User Input β†’ CLI processes commands and validates input
  2. AI Analysis β†’ MasterAgentCrew analyzes task using specialized AI agents
  3. Crew Creation β†’ CrewDesigner instantiates agents with appropriate tools
  4. Execution β†’ CrewAI Engine runs the crew with real-time monitoring

πŸ› οΈ Configuration

LLM Provider Setup

CrewAIMaster uses a .crewaimaster/config.yaml configuration file for all settings. Environment variables are no longer supported for configuring CrewAIMaster itself; all configuration must be done via CLI commands or by editing the config file directly.
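
For reference, the file can be inspected from Python; the layout sketched in the comment below is an assumption inferred from the CLI options, not a documented schema (requires PyYAML):

# Illustrative: load and inspect .crewaimaster/config.yaml
from pathlib import Path
import yaml  # PyYAML

config = yaml.safe_load((Path(".crewaimaster") / "config.yaml").read_text())
# Expected to contain something like provider, model, api_key, and base_url
# under an LLM section (assumed layout; run crewaimaster providers for the real settings)
print(config)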

πŸ“‹ View Available Providers

# See all supported providers and configuration examples
crewaimaster providers

πŸš€ CLI Configuration (All Providers)

Configure any supported provider using the CLI:

OpenAI:

crewaimaster providers --configure openai --api-key "your-openai-key" --model "gpt-4"
# Automatically sets base_url to https://api.openai.com/v1

Anthropic:

crewaimaster providers --configure anthropic --api-key "your-anthropic-key" --model "claude-3-sonnet-20240229"
# Automatically sets base_url to https://api.anthropic.com/v1

Google:

crewaimaster providers --configure google --api-key "your-google-key" --model "gemini-pro"
# Automatically sets base_url to https://generativelanguage.googleapis.com/v1beta

DeepSeek:

crewaimaster providers --configure deepseek --api-key "your-deepseek-key" --model "deepseek-chat"
# Automatically sets base_url to https://api.deepseek.com/v1

Custom Provider:

crewaimaster providers --configure custom --api-key "your-key" --base-url "https://api.example.com/v1" --model "gpt-4o-mini"
# Requires explicit base_url for custom endpoints

🀝 Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes and add tests
  4. Run tests: pytest tests/
  5. Commit changes: git commit -m 'Add amazing feature'
  6. Push to branch: git push origin feature/amazing-feature
  7. Open a Pull Request

Development Setup

# Clone and set up the development environment
git clone https://github.com/VishApp/crewaimaster
cd crewaimaster
# Install in editable mode
pip install -e .

πŸ“„ License

CrewAIMaster is released under the MIT License. See LICENSE for details.

πŸ™ Acknowledgments

πŸ”— Links


Built with ❀️ for the AI community
