
Run AI code instantly — sandboxed, automated, effortless with Auto-Fix!





⚡ The Difference

😩 Traditional Workflow

1. Ask AI for code
2. Copy code manually
3. Open terminal
4. Paste and run
5. See error
6. Copy error back to AI
7. Repeat 5-10 times...
⏱️ ~5 minutes per task

⚡ AI Code Executor

1. Ask AI for code
✅ Auto-executed in Docker
✅ Errors auto-detected
✅ Auto-fixed (up to 10x)
✅ Results displayed
✅ Full terminal access
✅ Export & share containers
⏱️ ~30 seconds per task



🐳 Docker Containers

Every Conversation Gets Its Own Isolated Container

When you start a new chat, a dedicated Docker container is automatically created just for that conversation. Your code runs in complete isolation - safe, secure, and reproducible.
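
Conceptually, this maps to a docker run with a per-conversation name and the configured resource limits. A minimal sketch using the docker Python SDK (names and values here are illustrative, not the app's actual backend code in code_executor.py):

import docker

client = docker.from_env()
conversation_id = "demo"  # hypothetical ID; the app generates its own

# One sandbox per conversation, capped by the configured resource limits
sandbox = client.containers.run(
    "ai-code-executor:latest",      # the sandbox image built from Dockerfile
    command="sleep infinity",       # keep the container alive while chatting
    name=f"sandbox-{conversation_id}",
    detach=True,
    mem_limit="8g",                 # Memory default
    nano_cpus=2 * 10**9,            # 2 CPU cores (nano_cpus = cores * 1e9)
    working_dir="/workspace",
)
print(sandbox.name)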

📦 Pre-installed Tools

| Languages | Package Managers | Build Tools |
|---|---|---|
| 🐍 Python 3.11 | 📦 pip / pip3 | 🔧 gcc / g++ |
| 🟢 Node.js 18 | 📦 npm / yarn | 🔧 make / cmake |
| 🦀 Rust (rustc) | 📦 cargo | 🔧 git |
| 💎 Ruby | 📦 gem | 🔧 curl / wget |
| 🐹 Go | 📦 go mod | 🔧 vim / nano |
| ☕ Java (OpenJDK) | 📦 maven | 🔧 jq / yq |

⚡ Container Features

| Feature | Description |
|---|---|
| 💻 Full Bash Shell | Complete Linux environment |
| 🌐 Internet Access | Download packages, fetch data |
| 📁 Persistent Files | Files persist during the conversation |
| 📦 Exportable | Save the entire container as a portable image |

🎛️ Resource Limits (Configurable)

| Resource | Range | Default |
|---|---|---|
| CPU Cores | 1-16 | 2 |
| Memory | 512MB - 32GB | 8GB |
| Storage | 1GB - 100GB | 10GB |
| Timeout | 0-3600s | 30s (0 = unlimited) |

🖥️ Built-in Web Terminal

Full terminal access directly in your browser - no SSH needed, no port forwarding, just click and type.

root@container:~$ python --version
Python 3.11.9
root@container:~$ node --version
v18.19.0
root@container:~$ pip install pandas numpy matplotlib
Successfully installed pandas-2.2.0 numpy-1.26.4 matplotlib-3.8.3
root@container:~$ ls -la
total 16
drwxr-xr-x 1 root root 4096 Jan 15 10:30 .
-rw-r--r-- 1 root root 2048 Jan 15 10:30 script.py
-rw-r--r-- 1 root root 8192 Jan 15 10:30 data.csv

Terminal Features:

  • 🪟 Multi-tab support - One terminal per conversation
  • ↔️ Drag & resize - Floating window you can move around
  • 🎨 Full color support - Syntax highlighting, colored output
  • ⌨️ Keyboard shortcuts - Ctrl+C, Ctrl+D, arrow keys, tab completion
  • 📜 Scrollback history - Review previous commands
  • 🔄 Persistent session - Terminal stays open while chatting

📦 Docker Image Export

Take your work anywhere - Export any conversation's container as a portable Docker image.

Built something cool? Export it and run it on another machine, share it with colleagues, or keep it as a backup.

How It Works

  1. Click the 🐳 button on any conversation
  2. Confirm the export
  3. Download the .tar file from the Images panel
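
Under the hood, exporting a container as an image corresponds to docker commit followed by docker save. A minimal sketch with the docker Python SDK (container and file names are placeholders, not the app's actual export code):

import docker

client = docker.from_env()

def export_container(container_name: str, out_path: str) -> None:
    """Snapshot a sandbox container and save it as a portable .tar image."""
    container = client.containers.get(container_name)
    image = container.commit(repository="my-project", tag="latest")  # container -> image
    with open(out_path, "wb") as f:
        for chunk in image.save(named=True):  # stream the image tarball, keeping the tag
            f.write(chunk)

export_container("sandbox-demo", "my-project.tar")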

On Another Machine

# Load the exported image
docker load < my-project_2025-01-15_143052.tar
# Run it
docker run -it my-project_2025-01-15_143052:latest bash
# You're back in your exact environment!
root@container:/workspace$ ls
script.py data.csv results/

Features

| Feature | Description |
|---|---|
| 🐳 One-Click Export | Export button on every conversation |
| 📁 Image Manager | View, download, delete exported images |
| ⚙️ Custom Path | Configure export location in Settings |
| 📦 Full Environment | Includes all files, packages, and state |

Configuration

Set custom export path in Settings → Docker → Image Export Path
Or via environment variable: DOCKER_EXPORT_PATH=./docker_images_exported




🤖 AI Providers

6 AI Providers - Choose Your Favorite (or Use Them All!)

| Provider | Models | Highlights |
|---|---|---|
| 🟣 Anthropic (Claude) | Claude 4 Opus, Claude 4 Sonnet, Claude 3.5 Sonnet, Claude 3.5 Haiku | Best reasoning |
| 🟢 OpenAI (GPT) | GPT-5.1, GPT-5 / Mini, GPT-4.1 / Mini, GPT-4o / Mini | Latest & greatest |
| 🔵 Google (Gemini) | Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash-Lite, Gemini 2.0 Flash | Fast & affordable |
| ⚫ Ollama | Llama 3 (8B/70B), CodeLlama, Mistral, DeepSeek Coder | 100% FREE, 100% private |
| 💻 LM Studio | Any GGUF model, Qwen Coder, DeepSeek, Llama 3 | GUI model manager, 100% FREE |
| 🎤 Whisper | Local Whisper, Remote GPU Server | Talk to code |

🦙 Ollama Integration - Free Local AI

Run AI completely locally - no API keys, no costs, no data leaving your machine.

# Install Ollama (one command)
curl -fsSL https://ollama.ai/install.sh | sh # Linux
brew install ollama # macOS
# Pull models
ollama pull llama3 # General purpose (8B)
ollama pull llama3:70b # More powerful (70B)
ollama pull codellama # Optimized for code
ollama pull deepseek-coder # Code specialist
ollama pull mistral # Fast & efficient
# Models auto-detected in AI Code Executor!

Ollama Features:

  • Auto-detection - Models appear in the dropdown automatically (see the sketch below)
  • No API key needed - Just install and go
  • Offline capable - Works without internet
  • Privacy first - Your code never leaves your machine
  • Custom models - Import any GGUF model
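
Auto-detection works because Ollama serves a small HTTP API on port 11434. A sketch of how a client can list the installed models (illustrative; the app's detection code may differ):

import requests

OLLAMA_HOST = "http://localhost:11434"  # matches the OLLAMA_HOST default

def detect_ollama_models() -> list[str]:
    """Return locally installed Ollama models, or [] if the server is down."""
    try:
        resp = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=2)
        resp.raise_for_status()
        return [m["name"] for m in resp.json().get("models", [])]
    except requests.RequestException:
        return []

print(detect_ollama_models())  # e.g. ['llama3:latest', 'codellama:latest']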

💻 LM Studio Integration - GUI-Based Local AI

Run local AI models with a beautiful GUI for model management. Perfect if you prefer a visual interface over the command line.

Setup:

  1. Download LM Studio from lmstudio.ai
  2. Download models using the built-in model browser
  3. Start the server:
    • Go to "Local Server" tab (left sidebar)
    • Enable "Serve on Local Network" for remote access
    • Click "Start Server"
  4. Configure in AI Code Executor:
    • Settings → LM Studio Host URL
    • Local: http://localhost:1234
    • Remote: http://192.168.x.x:1234
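
After step 4, the app can reach LM Studio's OpenAI-compatible API. A sketch of verifying the connection and listing loaded models (illustrative, not the app's actual code):

import requests

LMSTUDIO_HOST = "http://localhost:1234"  # Settings -> LM Studio Host URL

def detect_lmstudio_models() -> list[str]:
    """List the models LM Studio is serving, or [] if unreachable."""
    try:
        resp = requests.get(f"{LMSTUDIO_HOST}/v1/models", timeout=2)
        resp.raise_for_status()
        return [m["id"] for m in resp.json().get("data", [])]
    except requests.RequestException:
        return []

print(detect_lmstudio_models())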

Recommended Models for Coding:

qwen2.5-coder-32b-instruct # Excellent for code (32B)
deepseek-coder-v2 # Fast code specialist
codellama-34b # Meta's code model

LM Studio Features:

  • Visual model browser - Download models with one click
  • GPU acceleration - Automatic CUDA/Metal detection
  • Model parameters - Adjust temperature, context length, etc.
  • Chat interface - Test models before using in app
  • Cross-platform - Windows, macOS, Linux

Network Access (Windows Firewall):

# Run as Administrator to allow remote connections
New-NetFirewallRule -DisplayName "LM Studio API" -Direction Inbound -LocalPort 1234 -Protocol TCP -Action Allow

🎤 Whisper Voice Integration

Talk instead of type - Whisper transcribes your voice to text in real-time.

| Option | Description |
|---|---|
| 🖥️ Local Whisper | Runs on your machine, requires Python + openai-whisper, works offline |
| 🚀 Remote GPU Server | Point to your Whisper server for faster GPU-accelerated transcription |

Configuration: Settings → Features → Whisper Server URL
Example: http://192.168.1.100:9000
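
For the local option, transcription boils down to a couple of calls to the openai-whisper package. A minimal sketch (audio.wav is a placeholder file; openai-whisper also needs ffmpeg on the PATH):

import whisper  # pip install openai-whisper

model = whisper.load_model("base")      # downloads the model on first use
result = model.transcribe("audio.wav")  # placeholder audio file
print(result["text"])                   # transcribed text, ready for the chat input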




🔧 Auto-Fix System

Intelligent Error Detection & Automatic Repair

When your code fails, AI Code Executor doesn't just show you the error - it automatically fixes it.

Example: Auto-Fix in Action

You: "Create a stock analysis dashboard"

| Step | Action | Result |
|---|---|---|
| 1 | 🤖 AI generates code | Code created |
| 2 | ⚡ Executing in Docker | ModuleNotFoundError: No module named 'pandas' |
| 3 | 🔧 Auto-fix 1/10 | Installing pandas... |
| 4 | ⚡ Re-executing | ModuleNotFoundError: No module named 'yfinance' |
| 5 | 🔧 Auto-fix 2/10 | Installing yfinance... |
| 6 | ⚡ Re-executing | ModuleNotFoundError: No module named 'plotly' |
| 7 | 🔧 Auto-fix 3/10 | Installing plotly... |
| 8 | ⚡ Re-executing | SUCCESS! Dashboard displayed |
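
The loop driving this is simple in outline. A hypothetical sketch, where execute_in_container() and ask_ai() stand in for the app's real Docker and AI-provider calls:

MAX_ATTEMPTS = 10  # Settings -> Features -> Max Attempts (1-20)

AUTO_FIX_PROMPT = (
    "The code execution failed with the following error:\n"
    "{errors}\n"
    "Analyze the error carefully and provide ONLY the fixed code."
)

def run_with_auto_fix(code: str) -> str:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        output, errors = execute_in_container(code)     # hypothetical helper
        if not errors:
            return output                               # success: show results
        prompt = AUTO_FIX_PROMPT.format(errors=errors)  # fill the {errors} placeholder
        code = ask_ai(prompt)                           # hypothetical helper
    raise RuntimeError(f"Still failing after {MAX_ATTEMPTS} attempts")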

Auto-Fix Configuration

| Setting | Description | Range | Default |
|---|---|---|---|
| 🔧 Max Attempts | How many times to retry fixing errors | 1-20 | 10 |
| ⏱️ Execution Timeout | Maximum time per code execution | 0-3600s | 30s (0 = unlimited) |

Location: Settings → Features

Custom Auto-Fix Prompt

Full control over how AI analyzes and fixes errors:

The code execution failed with the following error:
{errors}
Analyze the error carefully and provide ONLY the fixed code.
Do not explain - just provide working code.
If dependencies are missing, install them first with pip/npm.

💡 Use {errors} placeholder - it gets replaced with actual error output.

Location: Settings → Prompts → Auto-Fix Prompt Template




⚙️ Configuration

📝 Custom System Prompts

Shape how AI writes code for you:

Default prompt (optimized for code execution):

You are a professional coder who provides complete, executable code solutions. 
Present only code, no explanatory text. Present code blocks in execution order. 
If dependencies are needed, install them first using a bash script.

Example customizations you can add:

  • "Always use Python 3.11 features"
  • "Prefer async/await patterns"
  • "Include comprehensive error handling"
  • "Add logging to all functions"
  • "Use type hints everywhere"
  • "Write unit tests for all code"

Location: Settings → Prompts → System Prompt
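
To see where the system prompt lands, here is a sketch of a single request with the anthropic SDK (the model alias and variable names are illustrative; the other providers accept an equivalent parameter):

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=4096,
    system="You are a professional coder who provides complete, "
           "executable code solutions.",  # the system prompt
    messages=[{"role": "user", "content": "Create a stock analysis dashboard"}],
)
print(message.content[0].text)  # code blocks to execute in the sandbox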

🔑 API Keys Configuration

| Provider | Key Format | Get Your Key |
|---|---|---|
| 🟣 Anthropic (Claude) | sk-ant-api03-... | console.anthropic.com |
| 🟢 OpenAI (GPT) | sk-... | platform.openai.com |
| 🔵 Google (Gemini) | AIza... | makersuite.google.com |
| Ollama | Auto-detected | ollama.ai |
| 💻 LM Studio | Auto-detected | lmstudio.ai |
| 🎤 Whisper (Optional) | Server URL | Self-hosted |

Location: Settings → API Keys

🐳 Docker Resource Limits

| Setting | Range | Default | Description |
|---|---|---|---|
| CPU Cores | 1-16 | 2 | CPU cores per container |
| Memory | 512m-32g | 8g | RAM limit per container |
| Storage | 1g-100g | 10g | Disk space per container |
| Network | On/Off | On | Allow internet access |

Actions:

  • 🗑️ Stop All Containers - Stop all running containers
  • 🧹 Cleanup Unused - Remove stopped containers

Location: Settings → Docker

Environment Variables (.env)

# ═══════════════════════════════════════════════════════════════════════════
# 🔑 API KEYS
# ═══════════════════════════════════════════════════════════════════════════
ANTHROPIC_API_KEY=sk-ant-... # Claude models
OPENAI_API_KEY=sk-... # GPT models
GEMINI_API_KEY=AIza... # Gemini models
# ═══════════════════════════════════════════════════════════════════════════
# 🦙 OLLAMA (Local AI)
# ═══════════════════════════════════════════════════════════════════════════
OLLAMA_HOST=http://localhost:11434 # Local or remote Ollama server
# ═══════════════════════════════════════════════════════════════════════════
# 💻 LM STUDIO (Local AI)
# ═══════════════════════════════════════════════════════════════════════════
LMSTUDIO_HOST=http://localhost:1234 # Local or remote LM Studio server
# ═══════════════════════════════════════════════════════════════════════════
# 🎤 WHISPER (Voice Input)
# ═══════════════════════════════════════════════════════════════════════════
WHISPER_SERVER_URL= # Remote Whisper GPU server (optional)
# ═══════════════════════════════════════════════════════════════════════════
# ⚡ EXECUTION SETTINGS
# ═══════════════════════════════════════════════════════════════════════════
DOCKER_EXECUTION_TIMEOUT=30 # Seconds (0 = unlimited)
AUTO_FIX_MAX_ATTEMPTS=10 # Retry attempts (1-20)
# ═══════════════════════════════════════════════════════════════════════════
# 🐳 DOCKER RESOURCE LIMITS
# ═══════════════════════════════════════════════════════════════════════════
DOCKER_CPU_CORES=2 # 1-16 cores
DOCKER_MEMORY_LIMIT=8g # 512m-32g RAM
DOCKER_STORAGE_LIMIT=10g # 1g-100g disk
DOCKER_EXPORT_PATH=./docker_images_exported # Where exported images are saved
# ═══════════════════════════════════════════════════════════════════════════
# 📝 PROMPTS (Customize AI behavior)
# ═══════════════════════════════════════════════════════════════════════════
SYSTEM_PROMPT=You are a professional coder...
AUTO_FIX_PROMPT=The code failed with:\n\n{errors}\n\nProvide fixed code only.
# ═══════════════════════════════════════════════════════════════════════════
# 🌐 SERVER
# ═══════════════════════════════════════════════════════════════════════════
HOST=0.0.0.0
PORT=8000
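
A sketch of how these variables are typically consumed at startup (assuming python-dotenv; the backend's actual loader may differ):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the project root

timeout = int(os.getenv("DOCKER_EXECUTION_TIMEOUT", "30"))   # 0 = unlimited
max_attempts = int(os.getenv("AUTO_FIX_MAX_ATTEMPTS", "10"))
ollama_host = os.getenv("OLLAMA_HOST", "http://localhost:11434")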



📁 File Management

Upload, Browse, Download - All In Browser

| File | Size | Actions |
|---|---|---|
| 📄 script.py | 2.4 KB | 👁️ View · ⬇️ Download |
| 📄 data.csv | 156 KB | 👁️ View · ⬇️ Download |
| 📄 requirements.txt | 0.3 KB | 👁️ View · ⬇️ Download |
| 📄 output.json | 12 KB | 👁️ View · ⬇️ Download |
| 📁 results/ | | → Browse |
| 📄 chart.png | 89 KB | 👁️ View · ⬇️ Download |

Actions: 📤 Upload Files · 📥 Download All as ZIP

Features:

  • 📤 Drag & drop upload into containers
  • 👁️ Syntax-highlighted file preview
  • ⬇️ Download individual files
  • 📥 Bulk download as ZIP
  • ↩️ Send output to AI input (one-click)
  • 🔒 Large file protection (>1MB shows warning)
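
Browsing and downloading map onto Docker's archive API (the docker cp mechanism). A sketch with the docker Python SDK (the container name and path are placeholders):

import io
import tarfile
import docker

client = docker.from_env()

def download_file(container_name: str, path: str) -> bytes:
    """Fetch a single file out of a sandbox container."""
    container = client.containers.get(container_name)
    stream, _stat = container.get_archive(path)   # tar stream of that path
    buf = io.BytesIO(b"".join(stream))
    with tarfile.open(fileobj=buf) as tar:
        member = tar.getmembers()[0]
        return tar.extractfile(member).read()

data = download_file("sandbox-demo", "/workspace/script.py")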



📱 Mobile Support

Fully Responsive - Works on Any Device

[Screenshot: mobile interface]

Mobile Features:

  • 📱 Touch-optimized interface
  • 🍔 Collapsible sidebar
  • ⌨️ Keyboard-aware input area
  • 🎤 Voice input support




🚀 Quick Start

# Clone
git clone https://github.com/Ark0N/AI-Code-Executor.git
# ...or download the archive instead (note: the zip extracts to AI-Code-Executor-main)
wget https://github.com/Ark0N/AI-Code-Executor/archive/refs/heads/main.zip
unzip main.zip
cd AI-Code-Executor
# Install (auto-detects OS & container runtime)
chmod +x INSTALL.sh && ./INSTALL.sh
# Start
./start.sh



🐳 Docker Deployment

Run AI Code Executor entirely in Docker - no local Python installation required!

Quick Docker Start

# Clone the repository
git clone https://github.com/Ark0N/AI-Code-Executor.git
cd AI-Code-Executor
# Create .env file with your API keys
cp .env.example .env
# Edit .env and add your API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)
# Start with Docker Compose
docker compose up -d
# View logs
docker compose logs -f
# Open http://localhost:8000

Docker Files Overview

| File | Purpose |
|---|---|
| Dockerfile | Sandbox container image (where user code runs) |
| Dockerfile.app | Main application container |
| docker-compose.yml | Complete deployment configuration |
| .dockerignore | Files excluded from build context |

How It Works

The application uses Docker-in-Docker (via socket mounting):

┌─────────────────────────────────────────────────────────────┐
│ Host Machine │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ AI Code Executor Container │ │
│ │ - FastAPI Backend │ │
│ │ - Web Frontend │ │
│ │ - Docker CLI (talks to host Docker) │ │
│ └───────────────────────────────────────────────────────┘ │
│ │ │
│ │ /var/run/docker.sock │
│ ▼ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Docker Daemon │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Sandbox #1 │ │ Sandbox #2 │ │ Sandbox #N │ │ │
│ │ │ (Conv. 1) │ │ (Conv. 2) │ │ (Conv. N) │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ └───────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
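
From inside the app container, the mounted socket makes the host daemon one call away, so sandboxes run as siblings of the app container rather than children. A sketch with the docker Python SDK:

import docker

# The socket is bind-mounted into the app container by docker-compose.yml
client = docker.DockerClient(base_url="unix://var/run/docker.sock")
print(client.ping())              # True if the *host* daemon answers

for c in client.containers.list():
    print(c.name)                 # sandbox containers show up here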

Environment Variables

Create a .env file in the project root:

# Required - At least one AI provider
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=AIza...
# Optional - Server settings
PORT=8000
# Optional - Docker resource limits for code execution
DOCKER_CPU_CORES=2
DOCKER_MEMORY_LIMIT=8g
DOCKER_STORAGE_LIMIT=10g
DOCKER_EXECUTION_TIMEOUT=30
# Optional - Ollama (if running on host)
OLLAMA_HOST=http://host.docker.internal:11434
# Optional - Remote Whisper server
WHISPER_SERVER_URL=

Docker Commands Reference

# Start in background
docker compose up -d
# Start with build (after code changes)
docker compose up -d --build
# View logs
docker compose logs -f
# View logs for specific service
docker compose logs -f ai-code-executor
# Stop containers
docker compose down
# Stop and remove volumes (deletes data!)
docker compose down -v
# Restart
docker compose restart
# Check status
docker compose ps
# Shell into running container
docker exec -it ai-code-executor bash
# Build sandbox image manually
docker build -t ai-code-executor:latest .
# Build app image manually 
docker build -f Dockerfile.app -t ai-code-executor-app:latest .

Connecting to Ollama (Local AI)

If you're running Ollama on your host machine:

macOS / Windows (Docker Desktop):

# Ollama is accessible at host.docker.internal automatically
OLLAMA_HOST=http://host.docker.internal:11434

Linux:

# The docker-compose.yml includes extra_hosts for this
OLLAMA_HOST=http://host.docker.internal:11434
# Or use your machine's IP
OLLAMA_HOST=http://192.168.1.100:11434

Data Persistence

Docker Compose creates named volumes for persistent data:

| Volume | Purpose | Path in Container |
|---|---|---|
| ai-executor-data | Database (conversations, settings) | /app/data |
| ai-executor-exports | Exported Docker images | /app/docker_images_exported |

To backup your data:

# Backup database
docker run --rm -v ai-executor-data:/data -v $(pwd):/backup alpine \
 tar cvf /backup/ai-executor-backup.tar /data
# Restore database
docker run --rm -v ai-executor-data:/data -v $(pwd):/backup alpine \
 tar xvf /backup/ai-executor-backup.tar -C /

Troubleshooting Docker

Permission denied on Docker socket:

# On Linux, add your user to docker group
sudo usermod -aG docker $USER
newgrp docker
# Or run with sudo
sudo docker compose up -d

Sandbox image not building:

# Build manually
docker build -t ai-code-executor:latest .
# Check if image exists
docker images | grep ai-code-executor

Container can't reach Ollama:

# Verify Ollama is running
curl http://localhost:11434/api/tags
# Test from inside container
docker exec -it ai-code-executor curl http://host.docker.internal:11434/api/tags

Port already in use:

# Change port in .env
PORT=8001
# Or identify the conflicting service
sudo lsof -i :8000



📦 Installation

Supported Platforms

| Platform | Status | Notes |
|---|---|---|
| Ubuntu / Debian | ✅ | apt |
| Fedora / RHEL | ✅ | dnf |
| Arch / Manjaro | ✅ | pacman |
| macOS Intel | ✅ | Homebrew |
| macOS Apple Silicon | ✅ | M1/M2/M3/M4 |
| Windows | ✅ | WSL2 Ubuntu recommended |

Container Runtimes

| Runtime | Status |
|---|---|
| Docker Desktop | ✅ Recommended |
| Docker Engine | ✅ |
| Podman | ✅ |
| Colima | ✅ |

🍎 macOS

# Install Homebrew (if needed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Docker
brew install --cask docker # Docker Desktop
# OR
brew install colima docker && colima start # Colima (lightweight)

🐧 Linux

# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker



🔒 Security

  • Isolated Containers - Each chat runs in a separate Docker container
  • Resource Limits - CPU, memory, and storage caps prevent abuse
  • API Key Encryption - Keys are stored encrypted in the database
  • No Host Access - Code cannot escape the container sandbox
  • Auto Cleanup - Containers are removed when done
  • Network Control - Optional internet access restriction



📁 Project Structure

AI-Code-Executor/
├── backend/
│ ├── main.py # FastAPI app, auto-fix logic, endpoints
│ ├── code_executor.py # Docker container management
│ ├── anthropic_client.py # Claude API integration
│ ├── openai_client.py # GPT API integration
│ ├── gemini_client.py # Gemini API integration
│ ├── ollama_client.py # Local Ollama integration
│ ├── lmstudio_client.py # Local LM Studio integration
│ ├── whisper_client.py # Local Whisper voice input
│ ├── whisper_remote.py # Remote Whisper GPU server
│ └── database.py # SQLite async ORM
│
├── frontend/
│ ├── index.html # Main UI
│ ├── app.js # Application logic
│ └── style.css # Styling
│
├── whisper/ # Standalone Whisper server
├── docs/ # Documentation
├── scripts/ # Utility scripts
│
├── Dockerfile # Sandbox container (code execution)
├── Dockerfile.app # Application container
├── docker-compose.yml # Docker Compose deployment
├── .dockerignore # Docker build exclusions
├── INSTALL.sh # Universal installer
├── start.sh # Start server
├── requirements.txt # Python dependencies
└── .env.example # Configuration template



🤝 Contributing

  1. Fork the repository
  2. Create feature branch
  3. Make changes
  4. Submit pull request

See CONTRIBUTING.md




📝 License

MIT License - see LICENSE




Star

If AI Code Executor saves you time, show some love!





Built With

FastAPI · Docker · Python · JavaScript




Made with ❤️ for developers who want AI that actually runs code

Coded with the help of Claude AI (Sonnet 4.5 and Opus 4.5)

© 2025 AI Code Executor

