Easy agent building, made for YOU: create complex, deployable agents through a simple, Pythonic interface with natural control flow.
```python
import railtracks as rt

# Define a tool (just a function!)
def get_weather(location: str) -> str:
    return f"It's sunny in {location}!"

# Create an agent with tools
agent = rt.agent_node(
    "Weather Assistant",
    tool_nodes=(rt.function_node(get_weather),),  # trailing comma: single-element tuple
    llm=rt.llm.OpenAILLM("gpt-4o"),
    system_message="You help users with weather information.",
)

# Run it
result = await rt.call(agent, "What's the weather in Paris?")
print(result.text)  # "Based on the current data, it's sunny in Paris!"
```
That's it. No complex configurations, no learning proprietary syntax. Just Python.
```python
# Write agents like regular functions
@rt.function_node
def my_tool(text: str) -> str:
    return process(text)
```
- ✅ No YAML, no DSLs, no magic strings
- ✅ Use your existing debugging tools
- ✅ IDE autocomplete & type checking
```python
# Any function becomes a tool
agent = rt.agent_node(
    "Assistant",
    tool_nodes=(my_tool, api_call),
)
```
- ✅ Instant function-to-tool conversion
- ✅ Seamless API/database integration
- ✅ MCP protocol support
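Any plain function can be wrapped this way. A minimal sketch with made-up example data (the `lookup_price` function and `PRICES` table are hypothetical, not part of Railtracks):

```python
# A plain Python function; with Railtracks installed it becomes a tool via
#   price_tool = rt.function_node(lookup_price)
PRICES = {"apple": 1.25, "banana": 0.50}

def lookup_price(item: str) -> float:
    """Return the unit price for an item (case-insensitive lookup)."""
    return PRICES[item.lower()]

print(lookup_price("Apple"))  # 1.25
```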
```python
# Smart parallelization built in,
# with an interface similar to asyncio
result = await rt.call(agent, query)
```
- ✅ Easy to learn standardized interface
- ✅ Built-in validation, error handling & retries
- ✅ Auto-parallelization management
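Because the interface mirrors asyncio, standard concurrency patterns carry over directly. A sketch using plain asyncio with a stubbed-out agent call (a real `rt.call` needs a configured agent and API keys, so it is faked here):

```python
import asyncio

async def fake_agent_call(query: str) -> str:
    # Stand-in for `await rt.call(agent, query)`
    await asyncio.sleep(0)
    return f"answer to {query!r}"

async def main() -> list[str]:
    # Fan out several queries concurrently with asyncio.gather,
    # exactly as you would with rt.call
    return await asyncio.gather(
        fake_agent_call("q1"),
        fake_agent_call("q2"),
    )

results = asyncio.run(main())
print(results)  # ["answer to 'q1'", "answer to 'q2'"]
```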
```bash
railtracks viz  # See everything
```

- ✅ Real-time execution visualization
- ✅ Complete execution history
- ✅ Debug like regular Python code
📦 Installation
```bash
pip install railtracks railtracks-cli
```
⚡ Your First Agent in 5 Minutes
```python
import railtracks as rt

# 1. Create tools (just functions with decorators!)
@rt.function_node
def count_characters(text: str, character: str) -> int:
    """Count occurrences of a character in text."""
    return text.count(character)

@rt.function_node
def word_count(text: str) -> int:
    """Count words in text."""
    return len(text.split())

# 2. Build an agent with tools
text_analyzer = rt.agent_node(
    "Text Analyzer",
    tool_nodes=(count_characters, word_count),
    llm=rt.llm.OpenAILLM("gpt-4o"),
    system_message="You analyze text using the available tools.",
)

# 3. Use it to solve the classic "How many r's in strawberry?" problem
@rt.session
async def main():
    result = await rt.call(text_analyzer, "How many 'r's are in 'strawberry'?")
    print(result.text)  # "There are 3 'r's in 'strawberry'!"

# Run it
import asyncio
asyncio.run(main())
```
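The tool bodies above are ordinary Python, so you can sanity-check them without an LLM in the loop. Here they are undecorated (whether the decorated nodes remain directly callable is library-dependent, so the decorators are dropped in this sketch):

```python
# Same tool logic as above, minus the @rt.function_node decorators,
# so it can be unit-tested as plain Python
def count_characters(text: str, character: str) -> int:
    """Count occurrences of a character in text."""
    return text.count(character)

def word_count(text: str) -> int:
    """Count words in text."""
    return len(text.split())

print(count_characters("strawberry", "r"))  # 3
print(word_count("how many words here"))    # 4
```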
📊 Visualize Your Agent in 5 Seconds
```bash
railtracks init  # Set up visualization (one-time)
railtracks viz   # See your agent in action
```
Railtracks Visualizer
🔍 See every step of your agent's execution in real-time
🔍 Multi-Agent Research System
```python
# Research coordinator that uses specialized agents
researcher = rt.agent_node("Researcher", tool_nodes=(web_search, summarize))
analyst = rt.agent_node("Analyst", tool_nodes=(analyze_data, create_charts))
writer = rt.agent_node("Writer", tool_nodes=(draft_report, format_document))

coordinator = rt.agent_node(
    "Research Coordinator",
    tool_nodes=(researcher, analyst, writer),  # Agents as tools!
    system_message="Coordinate research tasks between specialists.",
)
```
🔄 Complex Workflows Made Simple
```python
# Customer service system with context sharing
async def handle_customer_request(query: str):
    with rt.Session() as session:
        # Technical support first
        technical_result = await rt.call(technical_agent, query)

        # Share context with billing if needed
        if "billing" in technical_result.text.lower():
            session.context["technical_notes"] = technical_result.text
            billing_result = await rt.call(billing_agent, query)
            return billing_result

        return technical_result
```
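Since an agent call is just an awaitable, generic async helpers compose with it. A retry-wrapper sketch with a stubbed flaky call (the attempt count is an arbitrary choice for illustration, not a Railtracks default):

```python
import asyncio

async def call_with_retries(call, attempts: int = 3):
    # Generic retry helper around any awaitable factory,
    # e.g. lambda: rt.call(agent, query)
    last_exc = None
    for _ in range(attempts):
        try:
            return await call()
        except Exception as exc:  # demo only; narrow this in real code
            last_exc = exc
    raise last_exc

# Demo with a stub that fails once, then succeeds
state = {"calls": 0}

async def flaky():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient")
    return "ok"

result = asyncio.run(call_with_retries(flaky))
print(result)  # "ok" after one retry
```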
A lightweight agentic LLM framework for building modular, multi-LLM workflows with a focus on simplicity and developer experience.
| Feature | Railtracks | LangGraph | Google ADK |
|---|---|---|---|
| 🐍 Python-first, no DSL | ✅ | ❌ | ✅ |
| 📊 Built-in visualization | ✅ | ✅ | |
| ⚡ Zero setup overhead | ✅ | ✅ | ❌ |
| 🔄 LLM-agnostic | ✅ | ✅ | ✅ |
| 🎯 Pythonic style | ✅ | ❌ | |
Switch between providers effortlessly:
```python
# OpenAI
rt.llm.OpenAILLM("gpt-4o")

# Anthropic
rt.llm.AnthropicLLM("claude-3-5-sonnet")

# Local models
rt.llm.OllamaLLM("llama3")
```
Works with OpenAI, Anthropic, Google, Azure, and more! Check out our neatly crafted docs.
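One way to make the provider a plain config value is a small factory dict. The constructors are stubbed below so the sketch runs standalone; with Railtracks installed they would be the `rt.llm` classes shown above:

```python
# Provider factory sketch: pick an LLM class from one config value.
# These stubs stand in for rt.llm.OpenAILLM / AnthropicLLM / OllamaLLM.
def OpenAILLM(model: str):
    return ("openai", model)

def AnthropicLLM(model: str):
    return ("anthropic", model)

def OllamaLLM(model: str):
    return ("ollama", model)

PROVIDERS = {
    "openai": OpenAILLM,
    "anthropic": AnthropicLLM,
    "ollama": OllamaLLM,
}

def make_llm(provider: str, model: str):
    # Look up the constructor by name and build the LLM wrapper
    return PROVIDERS[provider](model)

print(make_llm("ollama", "llama3"))  # ('ollama', 'llama3')
```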
Use existing tools or create your own:
- ✅ Built-in tools (RAG, CoT, etc.)
- ✅ Functions → Tools automatically
- ✅ MCP Integration as client or as server
- ✅ Agents as Tools → agent cluster
Debug and monitor with ease:
- ✅ Real-time execution graphs
- ✅ Performance metrics
- ✅ Error tracking & debugging
- ✅ Local visualization
- ✅ Session management
- ✅ No signup required!
- **Documentation**: Complete guides & API reference
- **Quickstart**: Up and running in 5 minutes
- **Examples**: Real-world implementations
- **Discord**: Get help & share creations
- **Contributing**: Help make us better
```bash
pip install railtracks railtracks-cli
```
Star this repo
You grow, we grow - Railtracks will expand with your ambitions.
Made with lots of ❤️ and ☕ by the Railtracks team • Licensed under MIT • Report Bug • Request Feature