Optimizing Traefik for Multi-Host Self-Hosting
Running multiple services across Docker hosts requires a robust reverse proxy. Traefik excels here with its dynamic configuration, automatic TLS via Let's Encrypt, and native Docker integration.
The Setup
My home lab runs across three machines:
- Host A — Core services (Nextcloud, Gitea, monitoring)
- Host B — Media and automation (Jellyfin, Home Assistant)
- Host C — Development and CI/CD (GitLab Runner, staging environments)
Each host runs Docker with Traefik configured as a reverse proxy.
Key Configuration Decisions
1. Centralized vs. Distributed Traefik
I chose a hybrid approach: one primary Traefik instance handles external traffic and TLS termination, while secondary instances on each host handle internal routing.
```yaml
# docker-compose.traefik.yml
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
      # ACME needs a storage path (matching the mount below) and a contact email
      - "--certificatesresolvers.letsencrypt.acme.storage=/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.email=admin@example.com"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./acme.json:/acme.json
```
2. Dynamic Configuration with File Provider
For services on remote hosts, I use Traefik's file provider with a shared config directory synced via rsync.
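A minimal sketch of what one synced file looks like (hostnames, ports, and paths here are illustrative, not my actual values). The primary instance watches a directory, and each file rsynced from a remote host declares a router plus a service pointing back at that host:

```yaml
# Static config on the primary instance:
#   --providers.file.directory=/etc/traefik/dynamic
#   --providers.file.watch=true

# /etc/traefik/dynamic/host-b.yml -- synced from Host B via rsync
http:
  routers:
    jellyfin:
      rule: "Host(`jellyfin.example.com`)"
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: jellyfin
  services:
    jellyfin:
      loadBalancer:
        servers:
          - url: "http://host-b.lan:8096"
```

With `watch=true`, Traefik picks up each rsync'd change without a restart, which is what makes the shared-directory approach viable.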
3. Middleware Chains
Standardized middleware chains for common patterns:
```yaml
http:
  middlewares:
    secure-headers:
      headers:
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        contentTypeNosniff: true
        browserXssFilter: true
    rate-limit:
      rateLimit:
        average: 100
        burst: 50
```
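Attaching a chain to a container is then a single label on the service side (the service name and domain below are placeholders). Since the middlewares are defined in the file provider, they get the `@file` suffix:

```yaml
# In a service's own docker-compose.yml
services:
  nextcloud:
    image: nextcloud:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`cloud.example.com`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
      - "traefik.http.routers.nextcloud.middlewares=secure-headers@file,rate-limit@file"
```

Defining the chains once in the file provider and referencing them by name keeps security policy consistent across all three hosts.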
Performance Tips
- Enable HTTP/3 — Traefik v3 supports it natively
- Use Redis for distributed rate limiting across hosts
- Tune connection pooling for upstream services
- Monitor with Prometheus — Traefik exports metrics out of the box
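The first and last of these are static-config toggles. A sketch of the extra flags (port numbers are my choices, not defaults):

```yaml
# Additions to the primary instance's command: section
command:
  - "--entrypoints.websecure.http3=true"    # HTTP/3 on the TLS entrypoint
  - "--metrics.prometheus=true"             # expose Prometheus metrics
  - "--metrics.prometheus.entrypoint=metrics"
  - "--entrypoints.metrics.address=:8082"   # dedicated scrape port
```

Note that HTTP/3 runs over QUIC, so the container also needs UDP 443 published (`"443:443/udp"` under ports:) alongside the existing TCP mapping.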
Results
After optimization, my setup handles 50+ services across three hosts with sub-100ms routing overhead and zero-downtime deployments via Docker rolling updates.