A monorepo-based autonomous agent system that runs persistent, background AI agents with multi-provider LLM support (OpenAI, OpenRouter, Ollama), featuring DAG-based task execution and a modern web interface.
- **Autonomous Agents** - LLM-powered decision making with tool selection
- **DAG Execution** - Directed Acyclic Graph task decomposition and parallel execution
- **Scheduled Tasks** - Cron-based scheduling with timezone support
- **Persistent State** - SQLite database maintains state across restarts
- **Tool System** - Extensible tool registry (web search, fetch, file ops, webhooks, email)
- **Goal-Oriented** - High-level objectives drive multi-step autonomous plans
- **Multi-Provider LLM** - Support for OpenAI, OpenRouter, and Ollama (local models)
- **Web Dashboard** - Modern SvelteKit interface for monitoring and management
- **Real-time Events** - Server-Sent Events (SSE) for live execution updates
- **Tool Calling Validation** - Startup checks ensure model compatibility
```
┌───────────────────────────────────────────────────────────┐
│                          Clients                          │
├──────────────┬──────────────┬──────────────┬──────────────┤
│     CLI      │   Web App    │     REPL     │  API Client  │
│              │  (SvelteKit) │              │ (JS/Python)  │
└──────┬───────┴──────┬───────┴──────┬───────┴──────┬───────┘
       │              │              │              │
       └──────────────┴──────┬───────┴──────────────┘
                             │
                     ┌───────▼───────┐
                     │  Backend API  │
                     │   (Fastify)   │
                     └───────┬───────┘
                             │
           ┌─────────────────┼─────────────────┐
           │                 │                 │
     ┌─────▼─────┐     ┌─────▼─────┐     ┌─────▼─────┐
     │    DAG    │     │   Agent   │     │   Tool    │
     │ Scheduler │     │  Runtime  │     │ Registry  │
     └─────┬─────┘     └─────┬─────┘     └───────────┘
           │                 │
           │     ┌───────────┘
           │     │
     ┌─────▼─────▼─────┐
     │    SQLite DB    │
     │    (Drizzle)    │
     └─────────────────┘
```
- Node.js >= 18.0.0
- pnpm >= 8.0.0
- One of:
- OpenAI API key
- OpenRouter API key
- Ollama running locally
```bash
# Clone the repository
git clone https://github.com/ugmurthy/asyncAgent.git
cd asyncAgent

# Install dependencies
pnpm install

# Set up environment
cp .env.example .env
# Edit .env with your LLM provider settings

# Build all packages
pnpm build
```
Edit `.env` with your preferred LLM provider:
For OpenAI:
```
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o
```
For OpenRouter:
```
LLM_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-your-key-here
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
```
For Ollama (Local):
```
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=mistral
```
```bash
# Start backend in development mode
pnpm dev

# Start web application (in another terminal)
pnpm --filter @async-agent/webapp dev
```
- Backend API: http://localhost:3000
- Web Dashboard: http://localhost:5173
```
asyncAgent/
├── packages/
│   ├── backend/              # Fastify API + Agent Runtime
│   │   ├── src/
│   │   │   ├── app/          # Server and routes
│   │   │   ├── agent/        # DAG executor, planner, providers
│   │   │   ├── scheduler/    # DAG scheduling
│   │   │   ├── db/           # Database schema and migrations
│   │   │   └── events/       # Event bus for SSE
│   │   └── package.json
│   │
│   ├── webApp/               # SvelteKit Web Interface
│   │   ├── src/
│   │   │   ├── routes/       # File-based routing
│   │   │   └── lib/          # Components, stores, utilities
│   │   └── package.json
│   │
│   ├── shared/               # Shared types, schemas, utilities
│   │   └── src/
│   │
│   ├── js-client/            # Auto-generated JS API client
│   ├── python-client/        # Auto-generated Python API client
│   │
│   ├── cli/                  # Command-line interface
│   ├── repl/                 # Interactive REPL
│   └── tui/                  # Terminal UI
│
├── openapi.yaml              # API specification
├── .env.example              # Environment template
├── pnpm-workspace.yaml       # Workspace configuration
└── package.json              # Root package
```
The agent runtime includes these built-in tools:
| Tool | Description |
|---|---|
| `web_search` | Search the web using DuckDuckGo |
| `fetch_page` | Fetch and extract content from a URL |
| `fetch_urls` | Fetch multiple URLs in parallel |
| `write_file` | Write content to a file |
| `read_file` | Read content from a file |
| `send_webhook` | Send HTTP webhook requests |
| `send_email` | Send emails (requires SMTP configuration) |
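The registry is extensible, so custom tools can sit alongside the built-ins. A minimal sketch of what that could look like — the `Tool` interface, `ToolRegistry` class, and the `reverse_text` tool below are all illustrative assumptions, not the backend's actual types:

```typescript
// Illustrative sketch of an extensible tool registry.
// The Tool/ToolRegistry shapes are assumptions, not the backend's real API.
interface Tool {
  name: string;
  description: string;
  execute(args: Record<string, unknown>): Promise<string>;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  get(name: string): Tool | undefined {
    return this.tools.get(name);
  }

  list(): string[] {
    return [...this.tools.keys()];
  }
}

// A hypothetical custom tool that reverses a string.
const reverseText: Tool = {
  name: "reverse_text",
  description: "Reverse the characters of a string",
  async execute(args) {
    return String(args.text).split("").reverse().join("");
  },
};

const registry = new ToolRegistry();
registry.register(reverseText);
```

The agent would then see `reverse_text` in the tool list it presents to the LLM, the same way it sees the built-ins.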
Base URL: `http://localhost:3000/api/v1`
| Endpoint | Description |
|---|---|
| `GET /health` | Health check |
| `GET /health/ready` | Readiness with LLM and scheduler status |
| Endpoint | Description |
|---|---|
| `POST /api/v1/goals` | Create a new goal |
| `GET /api/v1/goals` | List all goals |
| `POST /api/v1/goals/:id/run` | Trigger goal execution |
| `GET /api/v1/runs` | List all runs |
| `GET /api/v1/runs/:id/steps` | Get execution steps |
| Endpoint | Description |
|---|---|
| `POST /api/v1/create-dag` | Create a DAG from goal text |
| `POST /api/v1/execute-dag` | Execute a DAG |
| `POST /api/v1/resume-dag/:id` | Resume a suspended DAG |
| `GET /api/v1/dags` | List all DAGs |
| `GET /api/v1/dag-executions` | List DAG executions |
| `GET /api/v1/dag-executions/:id/events` | Stream execution events (SSE) |
| Endpoint | Description |
|---|---|
| `POST /api/v1/task` | Execute a task with an agent |
| `GET /api/v1/tools` | List available tools |
| `GET /api/v1/agents` | List agents |
See `openapi.yaml` for the complete API specification.
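As an illustration of the request shape, creating a goal with `POST /api/v1/goals` might look like the sketch below. The `goal` body field is an assumption mirroring the create-dag example elsewhere in this README; `openapi.yaml` remains the source of truth for the actual schema:

```typescript
// Illustrative helper that builds a request for the goals endpoint.
// The body field names are assumptions; check openapi.yaml for the real schema.
const BASE_URL = "http://localhost:3000/api/v1";

interface BuiltRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildCreateGoalRequest(goal: string): BuiltRequest {
  return {
    url: `${BASE_URL}/goals`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ goal }),
    },
  };
}

// Usage (requires a running backend):
// const { url, init } = buildCreateGoalRequest("Summarize today's AI news");
// const res = await fetch(url, init);
```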
```bash
# Install dependencies
pnpm install

# Run backend in dev mode (hot reload)
pnpm dev

# Run web app in dev mode
pnpm --filter @async-agent/webapp dev

# Build all packages
pnpm build

# Run tests
pnpm test

# Clean all build outputs
pnpm clean

# Generate API clients from OpenAPI spec
pnpm generate
```
```bash
# Generate migrations (after schema changes)
pnpm --filter backend db:generate

# Push schema changes to database
pnpm --filter backend db:push

# Open Drizzle Studio (DB GUI)
pnpm --filter backend db:studio
```
- Models: `gpt-4`, `gpt-4-turbo`, `gpt-4o`, `gpt-3.5-turbo` (1106+)
- Validation: Whitelist check for tool calling support
- Setup: Requires `OPENAI_API_KEY`
- Models: All models with function calling support
- Validation: Runtime API check for capabilities
- Setup: Requires `OPENROUTER_API_KEY`
- Models: `mistral`, `mixtral`, `llama2` (tool support varies by variant)
- Validation: Sample tool call test at startup
- Setup: Requires an Ollama server running at `OLLAMA_BASE_URL`
The system validates tool calling support at startup and fails fast if the selected model is incompatible.
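For the OpenAI whitelist case, a fail-fast startup check can be sketched roughly as below. The function name and model list here are illustrative, not the backend's actual code:

```typescript
// Illustrative fail-fast check mirroring the whitelist validation described above.
// The model list and function name are assumptions, not the backend's real code.
const TOOL_CALLING_MODELS = ["gpt-4", "gpt-4-turbo", "gpt-4o", "gpt-3.5-turbo-1106"];

function assertToolCallingSupport(model: string): void {
  // Accept exact matches and dated variants like "gpt-4o-2024-08-06".
  const supported = TOOL_CALLING_MODELS.some(
    (m) => model === m || model.startsWith(`${m}-`)
  );
  if (!supported) {
    throw new Error(`Model "${model}" does not support tool calling; aborting startup.`);
  }
}
```

Running this once at boot means a misconfigured model surfaces immediately instead of failing mid-run inside an agent step.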
Create a DAG that searches and summarizes AI news:
```bash
curl -X POST http://localhost:3000/api/v1/create-dag \
  -H "Content-Type: application/json" \
  -d '{
    "goal": "Search for the latest AI news, summarize the top 5 articles, and save to a markdown file",
    "schedule": "0 9 * * *",
    "timezone": "America/New_York"
  }'
```
The agent will autonomously:
- Decompose the goal into a DAG of tasks
- Search the web for recent AI news
- Fetch and extract content from articles
- Summarize findings using the LLM
- Save results to a markdown file
- Execute on schedule (daily at 9 AM ET)
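Progress can be watched live over the SSE endpoint (`GET /api/v1/dag-executions/:id/events`). The sketch below parses a single SSE frame by hand to show the wire format; the JSON payload shape is an assumption about what the backend emits, and in a browser `EventSource` handles this framing for you:

```typescript
// Illustrative parser for one Server-Sent Events frame.
// The event name and JSON payload are assumptions about the backend's output.
interface SseFrame {
  event: string;
  data: string;
}

function parseSseFrame(raw: string): SseFrame {
  let event = "message"; // SSE default when no "event:" line is present
  const dataLines: string[] = [];
  for (const line of raw.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
  }
  return { event, data: dataLines.join("\n") };
}

// Usage against a frame like the backend might send:
const frame = parseSseFrame('event: step-completed\ndata: {"stepId":"fetch-news"}');

// Browser equivalent:
//   new EventSource(`${base}/dag-executions/${id}/events`)
//     .addEventListener("step-completed", (e) => console.log(e.data));
```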
- Fastify - Fast HTTP server with schema validation
- Drizzle ORM - Type-safe SQLite ORM
- node-cron - Job scheduling
- OpenAI SDK - LLM integration
- Pino - Structured logging
- SvelteKit 5 - Full-stack framework
- TailwindCSS - Utility-first CSS
- bits-ui - Accessible UI components
- Lucide Svelte - Icons
- TypeScript - Type safety across packages
- Zod - Runtime validation
- pnpm workspaces - Monorepo management
- Monorepo setup with pnpm workspaces
- Multi-provider LLM abstraction layer
- Shared types and schemas
- Backend API structure
- Database schema and migrations
- DAG executor and planner
- Tool registry and core tools
- DAG scheduler with cron support
- Auto-generated API clients (JS/Python)
- Web dashboard (SvelteKit)
- Real-time events (SSE)
- Suspended state and resume functionality
- CLI improvements
- WebSocket support for bidirectional communication
- Agent memory and learning
- Plugin system for custom tools
MIT
Contributions welcome! See AGENTS.md for development guidelines.