
🤝 This guide was built collaboratively by Ozzy (Claude --- architecture, hardening, PowerShell steps) and Mercy (ChatGPT --- full stack design, Caddy/Postgres/Portainer integration, backup automation). It represents the official Atlantis ITS deployment standard for the Hetzner VPS.



Overview

Running n8n on Docker Desktop on a local Windows machine creates a single point of failure --- if the machine reboots, sleeps, or crashes, every client-facing automation goes down with it. This guide migrates the Atlantis automation stack to a Hetzner Cloud VPS, replacing the fragile local setup with a production-grade, always-on server running Docker + Caddy + PostgreSQL + n8n + NTFY + Portainer behind Cloudflare Access.

When complete, Atlantis will have:

  • n8n running 24/7 independent of any local machine

  • Automatic HTTPS via Caddy (no manual cert management)

  • PostgreSQL as n8n's database (production-grade vs SQLite)

  • NTFY self-hosted notifications replacing Telegram

  • Portainer for Docker container management via browser

  • Cloudflare Access protecting n8n and Portainer with identity policies

  • Nightly automated backups with 14-day retention

The Stack


| Component | Version | Port | Purpose |
|---|---|---|---|
| Ubuntu | 24.04 LTS | --- | OS --- Docker-optimized, supported through 2029 |
| Docker + Compose | Latest | --- | Container runtime |
| Caddy | 2 | 80 / 443 | Reverse proxy --- automatic HTTPS, zero config |
| PostgreSQL | 16 | 5432 (internal) | n8n database --- production-grade, replaces SQLite |
| n8n | Latest | 5678 (internal) | Automation engine --- Roofing Lead Engine + all workflows |
| NTFY | Latest | 80 (internal) | Self-hosted push notifications --- replacing Telegram |
| Portainer CE | Latest | 9000 (internal) | Docker management UI |
| UFW | Built-in | --- | Host firewall --- only 22/80/443 exposed externally |
| Cloudflare Access | Zero Trust | --- | Identity-based access control for n8n and Portainer |



💡 All services run in Docker containers on a single Hetzner CPX21 (~$10.50/mo). Only ports 22, 80, and 443 are exposed externally. Everything else communicates internally on the Docker network.



Deployment Order Overview

Follow these steps in sequence. Do not skip ahead.


| Phase | Steps |
|---|---|
| Phase 1 --- Provision | Steps 1--3: Create VPS, point DNS, SSH in |
| Phase 2 --- Harden | Steps 4--5: Secure the server before touching Docker |
| Phase 3 --- Deploy | Steps 6--10: Install Docker, build stack, start services |
| Phase 4 --- Validate | Steps 11--12: Test each service before going live |
| Phase 5 --- Protect | Steps 13--14: Enable Cloudflare proxy + Access policies |
| Phase 6 --- Automate | Steps 15--16: Backup script + nightly cron |
| Phase 7 --- Migrate | Step 17: Move production workflows from local n8n |


Phase 1 --- Provision

Step 1 --- Create the Hetzner Account & Server

  1. Go to: https://www.hetzner.com/cloud → Register

  2. Verify email and add payment method (credit card or PayPal --- hourly billing)

  3. Hetzner may request identity verification for new accounts --- normal, clears within hours

  4. Create a new Project named: atlantis-vps

Server Configuration


| Setting | Value |
|---|---|
| Location | Ashburn, VA (US East) --- ~20-40ms from Gainesville GA |
| Image | Ubuntu 24.04 LTS |
| Type | CPX21 --- 3 vCPU, 4GB RAM, 80GB NVMe (~$10.50/mo) |
| SSH Key | Add your public key (see Step 3 for generation) |
| Server Name | atlantis-n8n-prod |
| Password login | Disable if offered --- SSH key only |



  5. Click Create & Buy Now --- server provisions in ~30 seconds

  6. Note the public IPv4 address --- you will use it throughout this guide


💡 Hourly billing means you can spin up, test, and destroy a server for a few cents. Use a throwaway instance to practice Steps 4-10 before touching production.
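
If you prefer the CLI for throwaway instances, Hetzner's hcloud tool can create and destroy servers in one line each. A sketch, assuming hcloud is installed and authenticated with an API token; the server name scratch-test is just an example:

```bash
# Create a disposable practice server in Ashburn, then delete it when done
hcloud server create --name scratch-test --type cpx21 --image ubuntu-24.04 \
  --ssh-key shane-atlantis --location ash
hcloud server delete scratch-test
```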




📸 SCREENSHOT PLACEHOLDER --- Hetzner Console --- atlantis-n8n-prod server showing IPv4 address and Running status



Step 2 --- Add DNS Records in Cloudflare

Create three A records in Cloudflare DNS for your atlantisits.co zone. Set all three to DNS Only (Grey Cloud) for now --- you will switch to proxied after validating the stack in Step 13.


| Subdomain | Type | Value |
|---|---|---|
| n8n.atlantisits.co | A | [Hetzner server IPv4] |
| ntfy.atlantisits.co | A | [Hetzner server IPv4] |
| portainer.atlantisits.co | A | [Hetzner server IPv4] |



⚠️ Keep all three records DNS Only (Grey Cloud) for now. Switching to proxied before the stack is validated can mask connection errors and make debugging harder. Flip to proxied in Step 13 only after all services are loading.
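
To confirm the records resolve straight to the server (and not to Cloudflare proxy IPs) before moving on, a quick check from any machine with dig (nslookup works on Windows):

```bash
# Each query should return your Hetzner IPv4, not a Cloudflare address
dig +short n8n.atlantisits.co
dig +short ntfy.atlantisits.co
dig +short portainer.atlantisits.co
```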




📸 SCREENSHOT PLACEHOLDER --- Cloudflare DNS panel --- three A records pointing to Hetzner IP, all DNS Only



Step 3 --- Generate SSH Key & Connect

3.1 Generate SSH Key (Windows PowerShell)

If you already have an SSH key from GitHub, skip to 3.2 and reuse it.

```powershell
# Run in PowerShell on your Windows machine
ssh-keygen -t ed25519 -C 'atlantis-hetzner'

# Accept default path: C:\Users\Shane\.ssh\id_ed25519
# Set a passphrase (recommended) or press Enter for none

# Copy the public key to clipboard
Get-Content $env:USERPROFILE\.ssh\id_ed25519.pub | Set-Clipboard
```

Paste the copied public key into Hetzner Console → Security → SSH Keys → Add SSH Key. Name it: shane-atlantis.


⚠️ Never paste your private key (id_ed25519) anywhere. Only the .pub file goes to Hetzner. If asked to paste a key that starts with -----BEGIN OPENSSH PRIVATE KEY----- --- stop, that is the wrong file.



3.2 Connect to the Server

```bash
# Replace 1.2.3.4 with your actual Hetzner server IPv4
ssh root@1.2.3.4

# First connection --- you will see a fingerprint warning
# Type: yes (this is expected and safe)
# You are now logged in as root
```


📸 SCREENSHOT PLACEHOLDER --- Terminal --- SSH session connected as root to Hetzner server



Phase 2 --- Harden

Do this before installing anything else. An unprotected Ubuntu server gets probed by bots within minutes of going online. Complete both steps before moving to Docker.

Step 4 --- Lock Down the Firewall

```bash
# Update the system first
apt update && apt upgrade -y
apt install -y ca-certificates curl gnupg ufw

# Configure UFW --- allow only what is needed
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
ufw status
```


⚠️ Always run ufw allow OpenSSH BEFORE ufw enable. Enabling UFW without allowing SSH will lock you out of the server. If this happens, use the Hetzner Console rescue mode.



Expected output after ufw status:

```
Status: active

To         Action   From
--         ------   ----
OpenSSH    ALLOW    Anywhere
80/tcp     ALLOW    Anywhere
443/tcp    ALLOW    Anywhere
```

Step 5 --- Create a Non-Root User & Disable Root SSH

```bash
# Create a non-root user
adduser shane
usermod -aG sudo shane

# Copy SSH key to new user
rsync --archive --chown=shane:shane ~/.ssh /home/shane

# Disable root SSH login and password auth
nano /etc/ssh/sshd_config
# Find and change these two lines:
#   PermitRootLogin yes        → PermitRootLogin no
#   PasswordAuthentication yes → PasswordAuthentication no
# Save and exit: Ctrl+X → Y → Enter

# Note: on Ubuntu the SSH unit is named ssh, not sshd
systemctl restart ssh
```


⚠️ CRITICAL: Before closing your root session --- open a NEW terminal window and test SSH as your new user: ssh shane@[server-ip]. Confirm you can log in and run: sudo whoami (should return root). Only then close the root session. Skipping this test risks permanent lockout.
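
In practice the test looks like this (run in a new PowerShell window on your Windows machine while the root session stays open in the old one):

```bash
# From a NEW terminal --- do not close the root session yet
ssh shane@1.2.3.4
sudo whoami   # expected output: root
```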



From this point forward, all SSH connections use: ssh shane@[server-ip]


📸 SCREENSHOT PLACEHOLDER --- Terminal --- SSH session connected as shane, sudo whoami returning root



Phase 3 --- Deploy

Step 6 --- Install Docker

```bash
# You are now logged in as shane --- privileged commands need sudo

# Remove any old Docker versions
sudo apt remove -y docker docker-engine docker.io containerd runc || true

# Add Docker's official GPG key and repository
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker CE
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Enable Docker and verify
sudo systemctl enable docker
sudo systemctl start docker
docker --version
docker compose version

# Add shane to docker group (no sudo needed for docker commands)
sudo usermod -aG docker shane
newgrp docker
```


📸 SCREENSHOT PLACEHOLDER --- Terminal --- docker --version and docker compose version output confirming install



Step 7 --- Create Folder Structure

```bash
sudo mkdir -p /opt/atlantis/{caddy,ntfy,backups}
# Give shane ownership so later nano edits work without sudo
sudo chown -R shane:shane /opt/atlantis
cd /opt/atlantis
ls -la
# Expected: caddy/ ntfy/ backups/
```

All Atlantis stack files live under /opt/atlantis. This is your working directory for the rest of this guide.

Step 8 --- Create the .env File


nano /opt/atlantis/.env



Paste the following and replace all placeholder values with strong, unique passwords. Store them in your password manager immediately.

```
# PostgreSQL
POSTGRES_USER=atlantis
POSTGRES_PASSWORD=CHANGE_THIS_TO_A_LONG_RANDOM_PASSWORD
POSTGRES_DB=n8n

# n8n
N8N_HOST=n8n.atlantisits.co
N8N_PROTOCOL=https
WEBHOOK_URL=https://n8n.atlantisits.co/
GENERIC_TIMEZONE=America/New_York
N8N_SECURE_COOKIE=true
N8N_PROXY_HOPS=1
```
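
One way to fill those placeholders is to generate random values on the server itself; a minimal sketch using OpenSSL (any password manager's generator works just as well):

```bash
# 32 bytes of randomness, base64-encoded --- run once per placeholder
openssl rand -base64 32
```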


💡 N8N_PROXY_HOPS=1 is required when n8n runs behind a reverse proxy (Caddy in this case). Without it, webhook URLs generate incorrectly and the Roofing Lead Engine will break. This is an easy one to miss --- Mercy caught it.



Step 9 --- Create docker-compose.yml


nano /opt/atlantis/docker-compose.yml



Paste the full stack definition:

```yaml
services:
  caddy:
    image: caddy:2
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    depends_on:
      - n8n
      - ntfy
      - portainer

  postgres:
    image: postgres:16
    container_name: postgres
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    env_file: .env
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: '5432'
      DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
      DB_POSTGRESDB_USER: ${POSTGRES_USER}
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_HOST: ${N8N_HOST}
      N8N_PROTOCOL: ${N8N_PROTOCOL}
      WEBHOOK_URL: ${WEBHOOK_URL}
      GENERIC_TIMEZONE: ${GENERIC_TIMEZONE}
      N8N_SECURE_COOKIE: ${N8N_SECURE_COOKIE}
      N8N_PROXY_HOPS: ${N8N_PROXY_HOPS}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

  ntfy:
    image: binwiederhier/ntfy:latest
    container_name: ntfy
    restart: unless-stopped
    command: serve
    volumes:
      - ./ntfy/server.yml:/etc/ntfy/server.yml
      - ntfy_cache:/var/cache/ntfy
    environment:
      TZ: America/New_York

  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    command: -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  caddy_data:
  caddy_config:
  postgres_data:
  n8n_data:
  ntfy_cache:
  portainer_data:
```

Step 10 --- Create Caddyfile and NTFY Config

10.1 Caddyfile


nano /opt/atlantis/caddy/Caddyfile



Paste:

```
n8n.atlantisits.co {
    reverse_proxy n8n:5678
}

ntfy.atlantisits.co {
    reverse_proxy ntfy:80
}

portainer.atlantisits.co {
    reverse_proxy portainer:9000
}
```


💡 Caddy automatically provisions and renews TLS certificates via Let's Encrypt. No certbot, no cron jobs, no manual renewal. It just works.
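
Once the stack is up (Step 11) and before the Cloudflare proxy flip, you can confirm Caddy actually obtained a certificate with a quick check from any machine (after Step 13 the issuer will show Cloudflare's edge certificate instead):

```bash
# Verbose TLS output includes the certificate issuer --- expect Let's Encrypt
curl -vsI https://n8n.atlantisits.co 2>&1 | grep -i 'issuer'
```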



10.2 NTFY Server Config


nano /opt/atlantis/ntfy/server.yml



Paste:

```yaml
base-url: "https://ntfy.atlantisits.co"
cache-file: "/var/cache/ntfy/cache.db"
behind-proxy: true
listen-http: ":80"
```
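
After the stack is running (Step 11), NTFY can be sanity-checked with a single curl publish; atlantis-test below is just an example topic name:

```bash
# Publish a test message, then open https://ntfy.atlantisits.co/atlantis-test
# in a browser to watch it arrive
curl -d "Hello from the Atlantis VPS" https://ntfy.atlantisits.co/atlantis-test
```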

Phase 4 --- Validate

Step 11 --- Start the Stack

```bash
cd /opt/atlantis
docker compose pull
docker compose up -d

# Check all containers are running
docker compose ps

# Watch logs for errors
docker compose logs -f
# Press Ctrl+C to stop following logs
```

Expected output from docker compose ps --- all containers should show Status: running:

```
NAME        IMAGE                           STATUS
caddy       caddy:2                         running
postgres    postgres:16                     running
n8n         n8nio/n8n:latest                running
ntfy        binwiederhier/ntfy:latest       running
portainer   portainer/portainer-ce:latest   running
```


📸 SCREENSHOT PLACEHOLDER --- Terminal --- docker compose ps showing all 5 containers running



Step 12 --- Test Each Service

Open a browser and visit each URL. DNS must have propagated for these to resolve --- allow up to 5 minutes after creating the records.


| URL | Expected Result | First-Run Action |
|---|---|---|
| https://n8n.atlantisits.co | n8n setup wizard | Create owner account --- save credentials |
| https://ntfy.atlantisits.co | NTFY landing page | No action needed --- test push in Step 17 |
| https://portainer.atlantisits.co | Portainer setup | Create admin account --- save credentials |



⚠️ If any service is unreachable, check logs first: docker compose logs [service-name] --tail=50. Common cause: DNS not yet propagated. Wait 2 minutes and retry before assuming a config error.
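
To check all three endpoints in one pass from your local machine, a small loop over the hostnames from Step 2 (expect 200, or a 30x redirect, once each service is up):

```bash
# Print the HTTP status code returned by each service
for host in n8n ntfy portainer; do
  printf '%-35s ' "https://$host.atlantisits.co"
  curl -s -o /dev/null -w '%{http_code}\n' "https://$host.atlantisits.co"
done
```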




📸 SCREENSHOT PLACEHOLDER --- Browser --- n8n owner account creation screen loading at https://n8n.atlantisits.co




📸 SCREENSHOT PLACEHOLDER --- Browser --- Portainer admin setup screen loading at https://portainer.atlantisits.co



Phase 5 --- Protect

Step 13 --- Enable Cloudflare Proxy

Now that all services are validated and loading, switch the DNS records to proxied. This routes traffic through Cloudflare's network for DDoS protection, WAF, and Access policies.

In Cloudflare DNS panel --- click the grey cloud icon on each record to turn it orange (proxied):

  • n8n.atlantisits.co → Proxied

  • portainer.atlantisits.co → Proxied

  • ntfy.atlantisits.co → Proxied (optional --- proxied if you want public access, DNS Only if keeping internal)


💡 After switching to proxied, Cloudflare handles HTTPS termination. Caddy will still provision its own cert initially --- this is fine. The Cloudflare proxy sits in front of Caddy and everything still works.
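
To verify traffic is really flowing through Cloudflare after the flip, check the response headers; proxied responses are served by Cloudflare's edge:

```bash
# Expect 'server: cloudflare' plus a 'cf-ray' request ID header
curl -sI https://n8n.atlantisits.co | grep -iE '^(server|cf-ray)'
```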




📸 SCREENSHOT PLACEHOLDER --- Cloudflare DNS --- n8n and portainer records showing orange proxied cloud icons



Step 14 --- Protect n8n and Portainer with Cloudflare Access

Cloudflare Access adds identity-based authentication in front of your services --- no one can reach n8n or Portainer without first authenticating through Cloudflare, even if they know the URL.

  1. In Cloudflare Dashboard → Zero Trust → Access → Applications → Add an Application

  2. Select: Self-hosted

For n8n:


| Field | Value |
|---|---|
| Application name | Atlantis n8n |
| Application domain | n8n.atlantisits.co |
| Policy name | Atlantis Team Only |
| Rule type | Emails |
| Value | Srhardin@gmail.com (add Maranda's email too if needed) |
| Session duration | 24 hours |


Repeat for Portainer:


| Field | Value |
|---|---|
| Application name | Atlantis Portainer |
| Application domain | portainer.atlantisits.co |
| Policy | Reuse Atlantis Team Only policy |



💡 Leave ntfy without Access protection for now if you want n8n to push notifications to it without auth headers. If you restrict ntfy with Access, n8n's HTTP Request node will need to pass the Cloudflare Access service token --- add that complexity only when needed.
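
For reference, if ntfy is later placed behind Access, clients authenticate with a Cloudflare Access service token passed as two request headers. A hedged sketch with placeholder values (create the token under Zero Trust → Access → Service Auth):

```bash
# Placeholder credentials --- substitute the real service token values
curl -d "test message" \
  -H "CF-Access-Client-Id: YOUR_CLIENT_ID.access" \
  -H "CF-Access-Client-Secret: YOUR_CLIENT_SECRET" \
  https://ntfy.atlantisits.co/atlantis-test
```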




✅ After saving, visiting n8n.atlantisits.co or portainer.atlantisits.co will redirect to a Cloudflare login page requiring your email. This is your Atlantis security stack (Cloudflare + IPVanish + Avast Premium) now fully protecting the server layer.




📸 SCREENSHOT PLACEHOLDER --- Browser --- Cloudflare Access login screen appearing before n8n



Phase 6 --- Automate Backups

Step 15 --- Create Backup Script

This script backs up PostgreSQL, all Docker volumes, and all config files. Backups are stored locally on the VPS with 14-day retention. For offsite backup, copy the /opt/atlantis/backups folder to your NAS or D: drive periodically (future automation --- SOP candidate).


nano /opt/atlantis/backups/backup.sh



Paste:

```bash
#!/bin/bash
set -e

STAMP=$(date +%F-%H%M)
BACKUP_DIR="/opt/atlantis/backups/$STAMP"
mkdir -p "$BACKUP_DIR"

# Dump PostgreSQL (n8n database)
docker exec postgres pg_dump -U atlantis n8n > "$BACKUP_DIR/n8n-postgres.sql"

# Backup Docker volumes
# Note: Compose prefixes volume names with the project (folder) name ---
# confirm yours with `docker volume ls` (expected: atlantis_n8n_data etc.)
docker run --rm -v atlantis_n8n_data:/data -v "$BACKUP_DIR":/backup alpine tar czf /backup/n8n_data.tar.gz -C /data .
docker run --rm -v atlantis_portainer_data:/data -v "$BACKUP_DIR":/backup alpine tar czf /backup/portainer_data.tar.gz -C /data .

# Backup config files
cp /opt/atlantis/docker-compose.yml "$BACKUP_DIR/"
cp /opt/atlantis/.env "$BACKUP_DIR/"
cp /opt/atlantis/caddy/Caddyfile "$BACKUP_DIR/"
cp /opt/atlantis/ntfy/server.yml "$BACKUP_DIR/"

# Clean up backups older than 14 days
# (-mindepth 1 ensures the backups directory itself is never removed)
find /opt/atlantis/backups -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} \;

echo "Backup complete: $BACKUP_DIR"
```

```bash
# Make executable and run a test backup
chmod +x /opt/atlantis/backups/backup.sh
/opt/atlantis/backups/backup.sh

# Verify output
ls -lah /opt/atlantis/backups/
```


✅ A successful test backup creates a timestamped folder containing: n8n-postgres.sql, n8n_data.tar.gz, portainer_data.tar.gz, docker-compose.yml, .env, Caddyfile, server.yml
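
For completeness, a hedged restore sketch, untested in this guide: stop n8n so nothing writes during the restore, recreate the database, then replay the dump (replace [timestamp] with a real backup folder):

```bash
# Drop and recreate the n8n database, then load the SQL dump
docker compose stop n8n
docker exec postgres psql -U atlantis -d postgres -c 'DROP DATABASE n8n;'
docker exec postgres psql -U atlantis -d postgres -c 'CREATE DATABASE n8n;'
cat /opt/atlantis/backups/[timestamp]/n8n-postgres.sql | docker exec -i postgres psql -U atlantis -d n8n
docker compose start n8n
```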



Step 16 --- Schedule Nightly Backups

```bash
# Edit root's crontab (you are logged in as shane, so use sudo)
sudo crontab -e
# Select nano if prompted

# Add this line at the bottom:
15 2 * * * /opt/atlantis/backups/backup.sh >> /var/log/atlantis-backup.log 2>&1

# Save and exit: Ctrl+X → Y → Enter

# Verify cron entry
sudo crontab -l
```


💡 Backup runs at 2:15 AM Eastern daily. Check the log any morning with: tail -50 /var/log/atlantis-backup.log. Future improvement: add NTFY alert on backup completion or failure via a curl call at the end of backup.sh.
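
That NTFY alert is a two-line addition to the end of backup.sh; a sketch assuming a self-hosted topic named atlantis-backups (the topic name is hypothetical):

```bash
# Append after the final echo in backup.sh --- '|| true' keeps a failed
# notification from marking the whole backup run as failed
curl -s -d "Atlantis backup complete: $STAMP" \
  https://ntfy.atlantisits.co/atlantis-backups > /dev/null || true
```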



Phase 7 --- Migrate Production Workflows


⚠️ Do NOT migrate production workflows until the VPS stack is fully validated and backups are running. Create and test a dummy workflow on the VPS first. The Roofing Lead Engine is client-facing --- treat migration as a production cutover.



Step 17 --- Migrate n8n Workflows from Local Machine

17.1 Export from Local n8n

  1. Open local n8n: http://localhost:5678

  2. Settings → Export → Download all workflows as JSON

  3. Save file to: D:\Data\AtlantisITS\n8n\workflow-export-[date].json

17.2 Import to Hetzner n8n

  1. Open Hetzner n8n: https://n8n.atlantisits.co (through Cloudflare Access)

  2. Settings → Import → Upload the exported JSON file

  3. Re-enter ALL credentials --- API keys, webhook secrets, Google Sheets OAuth, Twilio, Discord


⚠️ Credentials are intentionally excluded from n8n exports for security. Every API key and secret must be re-entered manually. Budget 30-60 minutes for this step.



17.3 Update Vercel Environment Variables

```
# In Vercel Dashboard for the ai.atlantisits.co project:
# Settings → Environment Variables
#
# Update:
#   N8N_WEBHOOK_URL         = https://n8n.atlantisits.co/webhook/roofing-lead
#   ATLANTIS_WEBHOOK_SECRET = [unchanged --- same secret]
#
# Redeploy the project after updating env vars
```

17.4 End-to-End Test

  1. Submit a test lead on ai.atlantisits.co (or fire the webhook directly with curl; see the sketch after this list)

  2. Verify webhook fires and reaches Hetzner n8n (check n8n execution log)

  3. Verify all 3 notification channels fire: Discord ✅, NTFY ✅, Twilio ✅

  4. Verify Google Sheets CRM entry is created

  5. Once confirmed --- stop and remove local n8n Docker stack to free resources
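
To fire the webhook directly instead of using the form, a curl sketch follows; the JSON fields are illustrative, so match them (and any secret header) to what the Roofing Lead Engine workflow actually expects:

```bash
# Hypothetical payload --- adjust field names to the real workflow schema
curl -X POST https://n8n.atlantisits.co/webhook/roofing-lead \
  -H "Content-Type: application/json" \
  -d '{"name": "Test Lead", "phone": "555-0100", "source": "e2e-test"}'
```

If Cloudflare Access intercepts this request, see the Access policy row in Troubleshooting below.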


✅ Migration complete. The Roofing Lead Engine now runs 24/7 on Hetzner independent of your local machine. The Atlantis SaaS platform is now production-ready to pitch to contractors.



Troubleshooting


| Issue | Likely Cause | Fix | Prevention |
|---|---|---|---|
| Can't SSH after Step 5 | Root login disabled before testing new user | Hetzner Console → Rescue mode → re-enable root | Always test new-user SSH before closing root session |
| UFW locked out SSH | ufw enable before ufw allow OpenSSH | Hetzner Console → Rescue mode | Always allow OpenSSH first |
| Container not starting | Port conflict or bad env var | docker compose logs [name] --tail=50 | Validate .env file before first up |
| n8n webhook URL wrong | N8N_PROXY_HOPS missing or 0 | Set N8N_PROXY_HOPS=1 in .env → restart | Always set when behind a reverse proxy |
| Caddy cert fails | DNS proxied too early, or not yet propagated | Wait 5 min, keep records DNS Only first, then flip | Validate DNS before starting stack |
| Portainer unreachable | Socket path wrong | Verify docker.sock volume in compose file | Use exact path: /var/run/docker.sock |
| NTFY not receiving | behind-proxy missing in server.yml | Add behind-proxy: true → restart ntfy | Always set when behind Caddy/NGINX |
| Workflow credentials missing after import | Credentials excluded by design | Re-enter all API keys manually in n8n | Budget 30-60 min for credential re-entry |
| Cloudflare Access blocks n8n API | Access policy too broad | Exclude webhook subdomain from Access policy | Test webhook endpoint before enabling Access |


Quick Reference


| Item | Value |
|---|---|
| Hetzner plan | CPX21 --- 3 vCPU, 4GB RAM, 80GB NVMe |
| Location | Ashburn, VA (US East) |
| OS | Ubuntu 24.04 LTS |
| Monthly cost | ~$10.50 USD |
| Working directory (VPS) | /opt/atlantis |
| Working directory (Windows) | D:\Data\AtlantisITS |
| n8n URL | https://n8n.atlantisits.co |
| NTFY URL | https://ntfy.atlantisits.co |
| Portainer URL | https://portainer.atlantisits.co |
| Backup log | /var/log/atlantis-backup.log |
| Backup location | /opt/atlantis/backups/[timestamp]/ |
| Hetzner Console | https://console.hetzner.cloud |
| Cloudflare Zero Trust | https://one.dash.cloudflare.com |
| Check all containers | docker compose ps |
| View logs | docker compose logs -f [name] |
| Restart stack | docker compose down && docker compose up -d |
| Run manual backup | /opt/atlantis/backups/backup.sh |


Atlantis Infrastructure Impact


| Before Hetzner VPS | After Hetzner VPS |
|---|---|
| n8n on local Docker Desktop (SPOF) | ✅ n8n on Hetzner --- 24/7, no local dependency |
| Lead Engine down when PC reboots | ✅ Lead Engine survives PC off/sleep/updates |
| SQLite database (fragile) | ✅ PostgreSQL 16 --- production-grade persistence |
| Cloudflare Tunnel required for webhooks | ✅ Direct A record --- simpler, faster, more reliable |
| Telegram for notifications | ✅ Self-hosted NTFY --- no third-party dependency |
| No container management UI | ✅ Portainer CE --- browser-based Docker management |
| No automated backups | ✅ Nightly backup --- Postgres + volumes + configs |
| Not sellable as SaaS platform | ✅ Production-ready --- pitch contractors with confidence |
| Security gap at automation layer | ✅ Cloudflare Access --- identity-gated n8n + Portainer |
| ~$0/mo infrastructure | ~$10.50/mo for production-grade infrastructure |