April 2026

How to Deploy n8n on Docker in Production (2026 Guide)


A production n8n deployment is more than docker run. You need a real database, HTTPS, persistent volumes, backups, health checks, and a plan for upgrades. The default docker run command from the quickstart is fine for testing, but without persistent volumes it will lose your workflows the first time the container is removed or recreated. This guide walks through the production-grade setup.

What Production-Grade Actually Means

Five non-negotiables. First, workflows and credentials stored in Postgres (not SQLite). Second, HTTPS enforced via reverse proxy. Third, persistent volumes for credentials encryption keys and execution data. Fourth, automated backups of the Postgres database and credentials key. Fifth, health checks and restart-on-failure configuration.

The docker-compose.yml Layout

Three services minimum: n8n, postgres, and a reverse proxy (Caddy or Traefik are easiest for automatic HTTPS). The n8n service connects to postgres via the internal Docker network. The reverse proxy terminates HTTPS and forwards to n8n on port 5678.

Key environment variables: DB_TYPE=postgresdb, DB_POSTGRESDB_HOST, DB_POSTGRESDB_DATABASE, DB_POSTGRESDB_USER, DB_POSTGRESDB_PASSWORD. N8N_ENCRYPTION_KEY (a stable random string; changing it breaks all saved credentials). WEBHOOK_URL set to your public HTTPS URL. N8N_HOST and N8N_PROTOCOL matching your domain setup.
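The three-service layout and those variables can be sketched as a docker-compose.yml like the following; the domain, the secrets pulled from the environment, and the pinned image tag are placeholders to replace with your own values.

```yaml
services:
  n8n:
    image: n8nio/n8n:1.64.3          # pin an exact tag, never "latest"
    restart: unless-stopped
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}   # stable, backed up
      - N8N_HOST=n8n.example.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.example.com/
    volumes:
      - n8n_data:/home/node/.n8n     # local key material and config
    depends_on:
      - postgres

  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - pg_data:/var/lib/postgresql/data

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports: ["80:80", "443:443"]      # only the proxy is published
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data             # cert storage

volumes:
  n8n_data:
  pg_data:
  caddy_data:
```

Note that n8n itself publishes no ports; the proxy reaches it over the internal Docker network, and the named volumes make backups straightforward to locate.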

The Encryption Key Matters

N8N_ENCRYPTION_KEY encrypts credentials in the database. If you lose this key, every saved credential becomes unreadable and must be re-entered. If you rotate it by accident (a new deployment without the env var), same problem. Treat this like a database password: generate it once, back it up securely, never regenerate.
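One hedged way to generate the key, assuming openssl is installed on the host: run this once, store the value in your secret manager, and reuse it for every deployment.

```shell
# Generate a 32-byte random key as 64 hex characters.
# Do this ONCE; regenerating it orphans every saved credential.
N8N_ENCRYPTION_KEY="$(openssl rand -hex 32)"
echo "N8N_ENCRYPTION_KEY=$N8N_ENCRYPTION_KEY"
```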

HTTPS via Reverse Proxy

Caddy is the path of least resistance. A minimal Caddyfile points your domain at the n8n container and Caddy handles cert issuance via Let's Encrypt automatically. Traefik offers more flexibility with Docker label-based routing if you run multiple services on the same host. Nginx requires manually wiring Certbot.
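A minimal Caddyfile for this setup, assuming the compose service is named n8n and that n8n.example.com (a placeholder) resolves to this host; Caddy obtains and renews the certificate automatically.

```
n8n.example.com {
    reverse_proxy n8n:5678
}
```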

Whichever you choose, HTTPS must terminate at the proxy. Do not expose n8n directly on port 5678 to the internet. Do not use self-signed certs in production (browser warnings will scare users off).

Reverse Proxy Options Ranked

Caddy (simplest, auto-HTTPS): 95/100
Traefik (flexible, auto-HTTPS): 88/100
Cloudflare Tunnel (bypasses the proxy entirely): 82/100
Nginx + Certbot (manual cert management): 72/100

Persistent Volumes

Three volumes matter: the Postgres data volume (database contents), n8n's /home/node/.n8n volume (local encryption key material and config), and a reverse proxy volume for cert storage. Without persistent volumes, your data lives only in the container's writable layer and is wiped whenever the container is recreated.

Name your volumes explicitly in docker-compose.yml rather than using anonymous volumes, so you can find them later for backups and restores.

Backups

Daily automated Postgres dumps are the baseline: pg_dump the n8n database to a timestamped file, upload it to S3 or another off-host location, and retain copies for at least 30 days. Back up the N8N_ENCRYPTION_KEY separately in a password manager or secret store. Losing the encryption key is as bad as losing the database.
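A sketch of such a nightly job, assuming a postgres container named "postgres" and a database and user both named "n8n"; the bucket name and container name are placeholders, and the off-host upload is shown commented out so you can substitute your own tooling.

```shell
# Nightly n8n backup sketch: dump, timestamp, prune.
BACKUP_DIR="${BACKUP_DIR:-./n8n-backups}"
STAMP="$(date +%Y%m%d-%H%M%S)"
DUMP="$BACKUP_DIR/n8n-$STAMP.sql.gz"
mkdir -p "$BACKUP_DIR"

# Dump and compress the n8n database from inside the postgres container.
if docker exec postgres pg_dump -U n8n n8n 2>/dev/null | gzip > "$DUMP.tmp"; then
  mv "$DUMP.tmp" "$DUMP"
else
  rm -f "$DUMP.tmp"
fi

# Off-host copy, e.g. with the AWS CLI (bucket is a placeholder):
# aws s3 cp "$DUMP" "s3://example-backups/n8n/"

# Retention: prune local dumps older than 30 days.
find "$BACKUP_DIR" -name 'n8n-*.sql.gz' -mtime +30 -delete
```

Wire this into cron or a systemd timer, and remember the encryption key is backed up separately, not by this script.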

Test restores monthly. An untested backup is a hope, not a backup. Spin up a test n8n on a staging box, restore the dump, verify workflows load correctly.

Queue Mode for Scale

Default n8n runs workflows in-process. For production above a few thousand executions per day, move to queue mode. This splits work across a main instance and one or more worker instances, with Redis as the queue.

The architecture: main n8n handles the UI and webhooks, pushes executions to Redis. Worker instances pull from Redis and execute. Scale workers horizontally based on load. This lets a 2-VM setup handle tens of thousands of executions per day, and 5+ workers handle hundreds of thousands.
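The compose additions for queue mode look roughly like this; the worker service name, replica count, and image tag are illustrative, and the workers must share the same database and N8N_ENCRYPTION_KEY as the main instance.

```yaml
services:
  n8n:                              # main: UI + webhooks, pushes jobs to Redis
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis

  n8n-worker:
    image: n8nio/n8n:1.64.3         # same pinned tag as the main instance
    command: worker
    restart: unless-stopped
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}   # must match main
    deploy:
      replicas: 2                   # scale workers horizontally with load

  redis:
    image: redis:7
    restart: unless-stopped
```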

Resource Sizing

Single-instance n8n: 2 vCPU, 4GB RAM handles ~30,000 executions per month depending on workflow complexity. Queue mode with 2 workers: 4 vCPU, 8GB RAM total handles 100,000 to 200,000. Postgres: 2 vCPU, 4GB RAM handles n8n's database load up to millions of executions. Redis: minimal, 512MB is plenty for queue operations.

Upgrades and Version Pinning

Pin the n8n image to a specific version, not latest. Upgrading to a new version should be deliberate: take a backup, read the release notes, update the image tag, pull, restart, verify workflows load. Some minor versions introduce breaking changes in specific nodes. Never upgrade production without first testing in staging.

Monitoring and Alerting

Track: container uptime, execution success rate, webhook response times, database connection pool usage, disk space on volumes. Tools: Uptime Kuma for health checks, Prometheus plus Grafana for metrics, a Slack or email hook for alerts. At minimum, set an alert for when the n8n container restarts unexpectedly.
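The restart-on-failure baseline can live in compose itself; a sketch of a healthcheck against n8n's /healthz endpoint, assuming busybox wget is available in the image (verify both on your n8n version).

```yaml
services:
  n8n:
    restart: unless-stopped         # Docker restarts the container on crash
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```

External monitors like Uptime Kuma should probe the public HTTPS URL instead, so you also catch proxy and DNS failures.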

Production Deployment Checklist

Postgres database (not SQLite): Required
HTTPS via reverse proxy: Required
N8N_ENCRYPTION_KEY backed up: Required
Automated daily Postgres backups: Required
Monitoring and restart-on-failure: Required

Security Hardening

Protect the UI with authentication: n8n's built-in user management requires a login by default (the old N8N_BASIC_AUTH_* variables were removed in v1.0), or configure an SSO provider. Restrict the admin UI to known IPs via your reverse proxy if possible. Rotate database passwords periodically. Keep the host OS patched. Never expose Postgres or Redis to the public internet; they should only be reachable from the internal Docker network.
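Keeping Postgres and Redis off the public internet is mostly a matter of network placement; a sketch using an internal Docker network (service and network names are assumptions matching the earlier layout).

```yaml
services:
  postgres:
    networks: [internal]      # no "ports:" entry, so nothing is published
  redis:
    networks: [internal]
  n8n:
    networks: [internal, web] # reaches the database and the proxy
  caddy:
    networks: [web]

networks:
  internal:
    internal: true            # Docker blocks all external traffic on this network
  web:
```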

When Kubernetes Makes Sense

For most teams, docker-compose on a single beefy VM is fine. Kubernetes adds complexity without proportional benefit until you are running 10+ n8n workers, need multi-region failover, or have an existing Kubernetes platform you are integrating with. Do not adopt Kubernetes just because n8n has a Helm chart. Use the simplest architecture that meets your reliability needs.
