A high-performance HTTP reverse proxy built with Cloudflare's Pingora framework in Rust.
- Round-Robin Load Balancing - Distributes traffic evenly across backends
- Per-IP Rate Limiting - 10 req/60s per client (configurable)
- Automatic Failover - Retries on different backends when one fails
- Prometheus Metrics - Request latency histograms at /metrics
- Header Injection - Adds X-Proxy and X-Backend headers
- Connection Pooling - Reuses backend connections
- Graceful Shutdown - Handles in-flight requests on SIGTERM
- TOML Configuration - Easy configuration management
```bash
# Build
cargo build --release

# Start all (proxy + test backends)
bash scripts/start_all.sh

# Test (in another terminal)
curl http://localhost:8080/test
curl http://localhost:8080/metrics
```
Edit config/proxy.toml:
```toml
[server]
listen = "0.0.0.0:8080"
workers = 4

[upstreams]
backends = ["http://127.0.0.1:8000", "http://127.0.0.1:8001"]

[rate_limit]
max_requests = 10
window_seconds = 60
key_extractor = "client_ip"

[metrics]
enabled = true
endpoint = "/metrics"
```
```bash
# Development
RUST_LOG=info cargo run

# Production
RUST_LOG=info ./target/release/pace
```
```bash
# Run automated tests
bash scripts/test_proxy.sh

# Test load balancing
for i in {1..6}; do
  curl -s http://localhost:8080/ | grep backend
done

# Test rate limiting (sends 15 requests)
for i in {1..15}; do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
done

# View metrics
curl http://localhost:8080/metrics

# Load test with Apache Bench
ab -n 10000 -c 50 http://localhost:8080/
```
```mermaid
flowchart LR
    C[Client] --> P[Proxy:8080]
    P --> R[Rate Limiter]
    R --> LB[Round Robin LB]
    LB --> B1[Backend1:8000]
    LB --> B2[Backend2:8001]
```
- Request Filter - Rate limit check, add X-Proxy header
- Upstream Selection - Round-robin backend selection
- Failover - Retry on connection failure (max 1 retry)
- Response Filter - Add X-Backend header
- Metrics - Record latency histogram
- Logging - Log request details
Create `/etc/systemd/system/pace.service`:
```ini
[Unit]
Description=Pingora Reverse Proxy
After=network.target

[Service]
Type=simple
User=pace
WorkingDirectory=/opt/pace
ExecStart=/opt/pace/pace
Environment="RUST_LOG=info"
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Enable and start:
```bash
sudo systemctl daemon-reload
sudo systemctl enable pace
sudo systemctl start pace
sudo systemctl status pace
```

Add to prometheus.yml:
```yaml
scrape_configs:
  - job_name: 'pace'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8080']
    metrics_path: '/metrics'
```
```nginx
upstream pace_backend {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://pace_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /metrics {
        proxy_pass http://pace_backend;
        allow 10.0.0.0/8;
        deny all;
    }
}
```
Sliding window algorithm using `HashMap<String, Vec<u64>>`:
- Tracks request timestamps per IP
- Automatically expires old entries
- Thread-safe behind `Arc<RwLock<HashMap<String, Vec<u64>>>>`
Lock-free round-robin using `AtomicUsize`:
```rust
let index = self.round_robin_index.fetch_add(1, Ordering::Relaxed);
let backend_index = index % self.backends.len();
```
Per-request state:
```rust
struct ProxyContext {
    backend_index: usize,
    failure_count: usize,
    selected_backend: Option<String>,
    start_time: Instant,
}
```
Prometheus histogram:
```text
http_requests_duration_seconds_bucket{method="GET",status="200",le="0.01"} 23
http_requests_duration_seconds_sum{method="GET",status="200"} 0.523
http_requests_duration_seconds_count{method="GET",status="200"} 25
```
```bash
lsof -i :8080
kill -9 <PID>
```
```bash
ps aux | grep backend
curl http://127.0.0.1:8000
curl http://127.0.0.1:8001
```

```bash
# Monitor
top -u pace

# Profile
RUST_LOG=debug ./target/release/pace
```
Wait 60 seconds for the window to reset, or restart the proxy.
Expected on modern hardware:
- Throughput: 10,000+ req/s per worker
- Latency p50: < 10ms
- Latency p99: < 50ms
- Memory: < 50MB under load
Built with Pingora by Cloudflare.