A production-ready MLflow Tracking Server running on Docker, with a MySQL backend and S3-compatible artifact storage (RustFS).
```
┌─────────────────┐     ┌─────────────────┐
│   nginx-proxy   │────▶│     MLflow      │
│  (port 15000)   │     └────────┬────────┘
└─────────────────┘              │
                    ┌────────────┴────────────┐
                    │                         │
              ┌───────────┐           ┌───────────────┐
              │   MySQL   │           │    RustFS     │
              │ (Backend) │           │  (Artifacts)  │
              └───────────┘           └───────────────┘
```
- MLflow Tracking Server - Experiment tracking and model registry
- MySQL - Persistent backend store for metadata
- RustFS - S3-compatible object storage for artifacts
- nginx-proxy - Reverse proxy with Basic Authentication support
Create your environment file from the template:

```bash
cp env.template .env
```

Then edit `.env`:

```
# Hostname(s) for the MLflow server (comma-separated for multiple)
VIRTUAL_HOST=localhost
# VIRTUAL_HOST=example.com,localhost

# Optional: Specify MLflow version (leave empty for latest)
MLFLOW_VERSION=
```
To enable Basic Authentication, create/edit the htpasswd file:
```bash
# Install htpasswd if not available
# macOS: brew install httpd
# Ubuntu: apt-get install apache2-utils

# Create password file (replace 'username' and enter password when prompted)
htpasswd -c ./nginx/htpasswd/localhost username

# If you have multiple hosts, copy the file for each host
cp ./nginx/htpasswd/localhost ./nginx/htpasswd/example.com
```
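To add more users later, run `htpasswd` against the existing file without `-c` (which would recreate it); for example, with a hypothetical user name:

```bash
# Append another user to the existing password file
htpasswd ./nginx/htpasswd/localhost another_user
```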
Start all services:

```bash
docker compose up -d
```
The MLflow UI will be available at: http://localhost:15000
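To check that everything came up correctly, one option is to list the containers and probe the proxy; the credentials here are whatever was created with `htpasswd` above:

```bash
# All four services should be listed as running
docker compose ps

# Expect 200 (or 401 if Basic Authentication is enabled and the credentials are wrong or missing)
curl -s -o /dev/null -w "%{http_code}\n" -u username:password http://localhost:15000
```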
To stop all services:

```bash
docker compose down
```
Log runs from a Python client:

```python
import os

import mlflow

# Set tracking URI
mlflow.set_tracking_uri("http://localhost:15000")

# If Basic Authentication is enabled
os.environ["MLFLOW_TRACKING_USERNAME"] = "username"
os.environ["MLFLOW_TRACKING_PASSWORD"] = "password"

# Start logging
mlflow.set_experiment("my-experiment")
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.95)
    mlflow.log_artifact("model.pkl")
```
Alternatively, configure the client through environment variables:

```bash
export MLFLOW_TRACKING_URI="http://your-server:15000"
export MLFLOW_TRACKING_USERNAME="username"
export MLFLOW_TRACKING_PASSWORD="password"
```
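With these variables exported, any MLflow client started from that shell will use the remote server; a minimal check (the `train.py` script is just a placeholder, and `mlflow experiments search` assumes the MLflow 2.x CLI):

```bash
# The exported variables are picked up automatically by the MLflow client
python train.py

# Or verify connectivity with the MLflow CLI (MLflow 2.x)
mlflow experiments search
```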
Data is persisted in the following directories:
- `./mysql/data/` - MySQL database files
- `./rustfs/` - Artifact storage
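Since artifacts live on the local filesystem under `./rustfs/`, one simple backup approach is a file-level archive taken while the services are stopped; a sketch:

```bash
# Stop the stack so no files are being written, archive the artifacts, then start again
docker compose stop
tar czf "rustfs_backup_$(date +%Y%m%d_%H%M%S).tar.gz" ./rustfs
docker compose start
```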
```bash
# Backup
docker compose exec db sh -c 'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE"' > "$(date +%Y%m%d_%H%M%S)_mlflow.sql"

# Restore (replace the filename with your backup file)
docker compose exec -T db sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE"' < 20251213_143052_mlflow.sql
```
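To run the database dump on a schedule, a crontab entry along these lines could be used; the repository path and the `backups/` directory are placeholders:

```bash
# Daily at 02:00: dump the MLflow database to a timestamped file (note the escaped % signs required in crontab)
0 2 * * * cd /opt/mlflow-docker && docker compose exec -T db sh -c 'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE"' > "backups/$(date +\%Y\%m\%d)_mlflow.sql"
```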
To copy artifacts to another S3-compatible storage backend:

- Set up access to both the source and destination S3-compatible storage.
- Run a MinIO Client container with the host.docker.internal mapping:
```bash
docker run --rm -it \
  --add-host=host.docker.internal:host-gateway \
  --entrypoint sh \
  minio/mc
```
- Inside the container, configure the source and destination aliases:
```bash
# Source
mc alias set src http://host.docker.internal:9000 <ACCESS_KEY> <SECRET_KEY>

# Destination
mc alias set dst http://host.docker.internal:9001 <ACCESS_KEY> <SECRET_KEY>
```
- Copy the bucket:
```bash
mc mirror src/mlflow/artifacts dst/mlflow/artifacts
```
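To sanity-check the copy, `mc` can compare the two buckets using the aliases defined above:

```bash
# Lists objects that differ between source and destination; empty output means the buckets match
mc diff src/mlflow/artifacts dst/mlflow/artifacts
```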
Start the services and run the garbage collection script:

```bash
docker compose up -d
./mlflow-gc.sh
```
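The contents of `mlflow-gc.sh` are not shown here; a minimal sketch of such a script, assuming it invokes `mlflow gc` inside the MLflow container against the MySQL backend store (the `mlflow` service name and the credentials in the URI are placeholders to adjust to your setup), could look like:

```bash
#!/usr/bin/env bash
# Permanently remove runs that were marked as deleted in the MLflow UI.
set -euo pipefail

docker compose exec mlflow mlflow gc \
  --backend-store-uri "mysql+pymysql://mlflow:password@db:3306/mlflow"
```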