A high-performance, Redis-backed distributed rate limiting service written in Rust.
Features • Quick Start • API • Docs • Contributing
> "Our API got slammed at 3 AM. Again. By the time we noticed, the database was toast."
Throttler protects your APIs from abuse with distributed rate limiting that scales horizontally. Built in Rust for maximum performance, it uses Redis for shared state across multiple instances.
| Problem | Throttler Solution |
|---|---|
| Traffic spikes crashing services | Token bucket algorithm absorbs bursts |
| Single-instance rate limiting doesn't scale | Redis-backed state works across instances |
| Complex rate limiting logic in every service | Centralized REST API manages all limits |
| Inconsistent rate limit headers | Standard X-RateLimit-* headers everywhere |
- Token Bucket — Smooth rate limiting with burst support
- Sliding Window — Precise request counting
- Atomic Operations — Thread-safe with Lua scripts
- Auto Refill — Time-based token replenishment
- RESTful API — Full CRUD for rate configurations
- Standard Headers — `X-RateLimit-*`, `Retry-After`
- Health Checks — `/health` and `/ready` endpoints
- Graceful Shutdown — Completes in-flight requests
- Async/Await — Built on Tokio runtime
- Connection Pooling — Efficient Redis connections
- Zero-Copy — Minimal allocations in hot path
- Sub-millisecond — Typical response times
- Prometheus Metrics — Request rates, latencies
- Structured Logging — JSON with configurable levels
- Redis Commander — Visual inspection UI
- Tracing — Distributed request tracing
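Throttler implements its algorithms in Redis via atomic Lua scripts; to make the token-bucket behavior concrete, here is an illustrative in-process sketch in Python (the class name and fake-clock parameter are ours, not part of Throttler):

```python
import time

class TokenBucket:
    """Token-bucket sketch: holds up to `capacity` tokens (the burst
    ceiling), refilled continuously at `refill_rate` tokens per second."""

    def __init__(self, capacity, refill_rate, now=time.monotonic):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)  # start full so bursts are absorbed
        self.now = now                 # injectable clock, handy for tests
        self.last = now()

    def try_consume(self, n=1):
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.refill_rate)
        self.last = t
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

The cap on refill is what distinguishes token bucket from a plain counter: idle time earns back tokens, but never more than one full burst's worth.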
- Rust 1.70+
- Docker (for Redis)
```bash
# Clone the repository
git clone https://github.com/psenger/throttler.git
cd throttler

# Copy environment configuration
cp .env.example .env

# Start Redis
docker compose up -d

# Build and run
cargo build --release
cargo run --release
```
Server starts at http://localhost:8080
```bash
# 1. Create a rate limit: 100 requests per 60-second window
curl -X POST http://localhost:8080/rate-limit/my-api-key \
  -H "Content-Type: application/json" \
  -d '{"requests": 100, "window_ms": 60000}'

# 2. Check the rate limit (consumes 1 token)
curl -X POST http://localhost:8080/rate-limit/my-api-key/check

# 3. View current status
curl http://localhost:8080/rate-limit/my-api-key
```
```
┌─────────────────┐     ┌──────────────────────────────────┐     ┌─────────────────┐
│   Your APIs     │────▶│        Throttler Service         │────▶│      Redis      │
│   & Services    │◀────│            (Axum)                │◀────│  (State Store)  │
└─────────────────┘     └──────────────────────────────────┘     └─────────────────┘
                                         │
                          ┌──────────────┼──────────────┐
                          ▼              ▼              ▼
                     ┌─────────┐   ┌──────────┐   ┌──────────┐
                     │  Token  │   │ Sliding  │   │ Metrics  │
                     │ Bucket  │   │  Window  │   │ & Health │
                     └─────────┘   └──────────┘   └──────────┘
```
| Component | Technology | Purpose |
|---|---|---|
| HTTP Server | Axum | High-performance async web framework |
| Runtime | Tokio | Async task scheduling and I/O |
| State Store | Redis | Distributed rate limit state |
| Serialization | Serde | JSON request/response handling |
| Validation | Validator | Request input validation |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Liveness probe |
| GET | `/ready` | Readiness probe (checks Redis) |
| GET | `/rate-limit/:key` | Get rate limit status |
| POST | `/rate-limit/:key` | Create/update rate limit |
| DELETE | `/rate-limit/:key` | Delete rate limit |
| POST | `/rate-limit/:key/check` | Check and consume tokens |
Request:
```bash
curl -X POST http://localhost:8080/rate-limit/user-123/check \
  -H "Content-Type: application/json" \
  -d '{"tokens": 1}'
```
Response (200 OK):
```json
{
  "allowed": true,
  "remaining": 99,
  "limit": 100
}
```

Response Headers:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 99
```
Rate Limited Response (429 Too Many Requests):
```json
{
  "allowed": false,
  "remaining": 0,
  "limit": 100
}
```

Rate Limited Headers:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
Retry-After: 60
```
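A well-behaved client should back off for the number of seconds given in `Retry-After` when it receives a 429. A small, defensive parser (illustrative Python; the helper name and fallback default are ours):

```python
def retry_after_seconds(headers, default=1.0):
    """Parse the seconds form of Retry-After from a 429 response's
    headers. Returns `default` if the header is absent or malformed."""
    value = headers.get("Retry-After")
    try:
        return max(0.0, float(value))
    except (TypeError, ValueError):
        return default
```

Callers would sleep for the returned duration before retrying, rather than hammering a key that is already exhausted.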
See API Documentation for complete reference.
| Variable | Default | Description |
|---|---|---|
| `BIND_ADDRESS` | `127.0.0.1:8080` | Server bind address |
| `REDIS_URL` | `redis://127.0.0.1:6379` | Redis connection URL |
| `DEFAULT_CAPACITY` | `100` | Default bucket capacity |
| `DEFAULT_REFILL_RATE` | `10` | Default tokens per second |
| `RUST_LOG` | `info` | Log level (error/warn/info/debug/trace) |
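The variable names and defaults above are the service's documented configuration; the loader below is just an illustrative Python sketch of how a companion script might read them with the same fallbacks (Throttler itself is Rust and does not ship this function):

```python
import os

def load_config(env=None):
    """Read Throttler's documented environment variables, applying
    the defaults from the configuration table."""
    env = os.environ if env is None else env
    return {
        "bind_address": env.get("BIND_ADDRESS", "127.0.0.1:8080"),
        "redis_url": env.get("REDIS_URL", "redis://127.0.0.1:6379"),
        "default_capacity": int(env.get("DEFAULT_CAPACITY", "100")),
        "default_refill_rate": int(env.get("DEFAULT_REFILL_RATE", "10")),
        "rust_log": env.get("RUST_LOG", "info"),
    }
```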
The included docker-compose.yml provides:
- Redis on `localhost:6379`
- Redis Commander at `http://localhost:8081` (visual inspection)
```bash
# Start services
docker compose up -d

# View logs
docker compose logs -f

# Stop and cleanup
docker compose down -v
```
```bash
# Free tier: 100 requests per hour (3600000 ms)
curl -X POST http://localhost:8080/rate-limit/free:client-123 \
  -H "Content-Type: application/json" \
  -d '{"requests": 100, "window_ms": 3600000}'

# Pro tier: 10,000 requests per hour
curl -X POST http://localhost:8080/rate-limit/pro:client-456 \
  -H "Content-Type: application/json" \
  -d '{"requests": 10000, "window_ms": 3600000}'
```
```bash
# Limit expensive operations: 5 requests per hour
curl -X POST http://localhost:8080/rate-limit/reports:export \
  -H "Content-Type: application/json" \
  -d '{"requests": 5, "window_ms": 3600000}'
```
```javascript
// Dynamically set limits based on subscription tier
const limits = {
  free:       { requests: 100,    window_ms: 3600000 }, // 100/hour
  pro:        { requests: 10000,  window_ms: 3600000 }, // 10k/hour
  enterprise: { requests: 100000, window_ms: 3600000 }  // 100k/hour
};

await fetch(`/rate-limit/${tier}:${customerId}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(limits[tier])
});
```
| Document | Description |
|---|---|
| API Reference | Complete endpoint documentation |
| Architecture | System design and components |
| Code Architecture | Source code structure and data flow |
| Deployment | Docker, Kubernetes, production setup |
| Examples | Integration examples (Python, Node.js) |
| Monitoring | Prometheus, Grafana, alerting |
| Troubleshooting | Common issues and solutions |
| Changelog | Version history and releases |
```bash
# Run tests
cargo test

# Run with debug logging
RUST_LOG=debug cargo run

# Format code
cargo fmt

# Lint code
cargo clippy

# Run specific test
cargo test test_token_bucket -- --nocapture
```
- Token bucket rate limiting
- Sliding window rate limiting
- Redis-backed distributed state
- RESTful API
- Health/readiness endpoints
- Prometheus metrics
- Docker Compose setup
- Comprehensive test suite
- Fixed window algorithm
- Grafana dashboard templates
- Helm charts
- Circuit breaker pattern
- Request queuing
- WebSocket notifications
Contributions are welcome! Please read our Contributing Guide before submitting changes.
```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/throttler.git

# Create a branch
git checkout -b feature/amazing-feature

# Make changes and test
cargo test

# Submit a pull request
```
If you discover a security vulnerability, please email the maintainer directly instead of opening a public issue. See CONTRIBUTING.md for details.
This project is licensed under the MIT License - see the LICENSE file for details.
Philip A Senger
- GitHub: @psenger
- Repository: github.com/psenger/throttler
Built with Rust for teams who take API reliability seriously.