Sakura VPS

Role in My Infrastructure

The Sakura VPS is the public-facing entry point for all my self-hosted services. It runs HAProxy as a reverse proxy and connects to my home server via Tailscale, forwarding traffic through the encrypted VPN tunnel.

VPS Details

  • Provider: Sakura Internet (さくらのVPS)
  • Location: Tokyo, Japan
  • OS: Ubuntu 24.04 LTS
  • Role: Public reverse proxy (HAProxy) + Tailscale VPN gateway

SSH Access

sshd listens on port 28, IPv6 only ([::]:28), for two reasons:

  • Bot noise reduction: automated scanners almost exclusively hammer port 22 on IPv4. An IPv6-only non-standard port receives virtually no unsolicited connection attempts.
  • Tailscale fallback: if Tailscale goes down and the VPN tunnel is unreachable, direct SSH over IPv6 is still available as a recovery path.
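For reference, a minimal sketch of the sshd side (the drop-in file path is an assumption; adjust to how your sshd_config is organized):

/etc/ssh/sshd_config.d/listen.conf
# Non-standard port, IPv6 only
Port 28
AddressFamily inet6
ListenAddress ::

AddressFamily inet6 is what disables the IPv4 listener; ListenAddress :: then binds all IPv6 addresses on port 28.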

HAProxy Configuration

HAProxy is the only process listening on public-facing ports. All traffic is forwarded through Tailscale to Incus containers on the home server. Full configuration at Benoit/HAProxy.
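A quick way to confirm this is to list the listening sockets and their owning processes:

sudo ss -tlnp

Everything bound to a public address should belong to haproxy, with sshd appearing only on [::]:28.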

Ports

Port  Protocol  Listener                  Forwards to
22    TCP       HAProxy (listen ssh)      forgejo.incus:10022 (Forgejo git SSH)
25    TCP       HAProxy (listen smtp)     Mailcow SMTP
80    TCP       HAProxy frontend_default  HTTP → HTTPS redirect (except retro.benoit.jp.net)
443   TCP       HAProxy frontend_default  HTTPS, SSL termination (ALPN h2 + http/1.1)
465   TCP       HAProxy (listen smtps)    Mailcow SMTPS
993   TCP       HAProxy (listen imaps)    Mailcow IMAPS
4190  TCP       HAProxy (listen sieve)    Mailcow ManageSieve
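As an illustration, one of the TCP listens might look like the following sketch (the server name is an assumption; the authoritative config is at Benoit/HAProxy):

listen ssh
    bind :::22 v4v6
    mode tcp
    # send-proxy passes the real client IP to the backend (PROXY protocol)
    server forgejo forgejo.incus:10022 send-proxy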

Web Backends

All HTTP/HTTPS traffic enters through a single frontend_default. Routing is done via hdr(host) ACLs (a condensed sketch follows the table):

Hostname                   Backend                 Notes
benoit.jp.net              www.incus:80            Public
forgejo.benoit.jp.net      forgejo.incus:3000      Public
mastodon.benoit.jp.net     mastodon2.incus:80      Public, rate-limited
retro.benoit.jp.net        retro.incus:80          HTTP-only (no TLS, for old clients)
navidrome.benoit.jp.net    navidrome.incus:4533    JP only
photoprism.benoit.jp.net   photoprism.incus:2342   JP + FR only
miniflux.benoit.jp.net     miniflux.incus:8080     Tailscale / allowed IPs only
kanboard.benoit.jp.net     kanboard.incus:80       Tailscale / allowed IPs only
vaultwarden.benoit.jp.net  vaultwarden.incus:80    JP + allowed IPs only
beszel.benoit.jp.net       beszel.incus:8090       Tailscale / allowed IPs only
jellyfin.benoit.jp.net     jellyfin.incus:8096     Tailscale / allowed IPs only
mail.benoit.jp.net         mailcow.incus:80        Tailscale / allowed IPs only
scrutiny.benoit.jp.net     scrutiny.incus:8080     Tailscale / allowed IPs only
uptime-kuma.benoit.jp.net  mxmon (Tailscale peer)  Tailscale / allowed IPs only
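A condensed sketch of the host-based routing (ACL names, backend names, and the certificate path are illustrative, not the real ones):

frontend frontend_default
    bind :::80 v4v6
    bind :::443 v4v6 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    mode http

    acl host_forgejo hdr(host) -i forgejo.benoit.jp.net
    use_backend bk_forgejo if host_forgejo

    default_backend bk_www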

Key Features

  • Country-based ACLs: Per-country IP lists loaded from flat files (/etc/haproxy/country/*.txt), used either to allow only JP/FR sources on sensitive services or to block specific regions. The script to generate these files is at Benoit/Scripts. See the sketch after this list.
  • PROXY protocol: TCP listens (mail ports, Forgejo SSH) forward with send-proxy so backends see the real client IP.
  • HSTS + security headers: All HTTPS backends set Strict-Transport-Security (1 year), X-Frame-Options, X-Content-Type-Options, and Referrer-Policy.
  • HTTP caching + compression: The default frontend caches cacheable responses and compresses text content with deflate/gzip.
  • www → non-www redirect: http-request redirect strips the www. prefix with a 301.
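Illustrative snippets for a few of these features (ACL names, the jp.txt filename, and the exact placement of the deny/redirect rules are assumptions):

# Country ACL: only JP source addresses may reach a sensitive host
acl src_jp src -f /etc/haproxy/country/jp.txt
acl host_navidrome hdr(host) -i navidrome.benoit.jp.net
http-request deny if host_navidrome !src_jp

# HSTS for one year on HTTPS responses
http-response set-header Strict-Transport-Security "max-age=31536000"

# www → non-www, 301
http-request redirect prefix https://benoit.jp.net code 301 if { hdr(host) -i www.benoit.jp.net }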

Firewall Setup

Why Not ufw?

Ubuntu ships ufw as its default firewall tool, but it manages rules through the iptables compatibility layer. Tailscale also injects its own chains via iptables-nft, and mixing the two rule sets leads to unpredictable behavior. The clean solution is to write native nft rules directly.

Comparison of options on Ubuntu:

  • ufw: Default, easiest, but conflicts with native nftables rules from Tailscale, Docker, and WireGuard.
  • nft directly: What Ubuntu's own docs recommend for granular control. Write rules in /etc/nftables.conf or a custom script.
  • firewalld: Uses nftables natively, but is primarily the Red Hat/Fedora world's tool.

Coexisting with Tailscale

Tailscale manages its own table ip filter and table ip6 filter chains via iptables-nft. Running nft flush ruleset would wipe those chains until Tailscale restarts.

The correct approach is to add a separate table inet filter hooked at priority filter + 10. Tailscale's chains run at priority 0 (the filter constant), so they are evaluated first. Note that an accept verdict in one table does not exempt a packet from chains in other tables, which is why the script below also explicitly accepts tailscale0 traffic; the inet filter table then acts as the gate for everything else.

/etc/nftables-local.sh
#!/bin/sh
# Adds inet filter table without touching Tailscale's ip/ip6 tables

nft add table inet filter 2>/dev/null || true

# Flush only this table so re-running the script doesn't duplicate rules;
# Tailscale's ip/ip6 tables are left alone.
nft flush table inet filter

nft -f - <<'EOF'
table inet filter {

    chain input {
        type filter hook input priority filter + 10; policy drop;

        iif lo accept
        ct state established,related accept

        ip  protocol icmp   accept
        ip6 nexthdr  ipv6-icmp accept

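        # Trust the tailnet; iifname matches by name, so the rule survives
        # tailscale0 being recreated. 41641/udp is Tailscale's WireGuard port.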
        iif "tailscale0" accept
        udp dport 41641 accept

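        # Public service ports: the HAProxy listens above plus sshd on 28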
        tcp dport { 22, 25, 28, 80, 443, 465, 993, 4190 } accept
    }

    chain forward {
        type filter hook forward priority filter + 10; policy drop;

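        # Route only tailnet traffic through this host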
        iif "tailscale0" accept
        oif "tailscale0" accept
    }

    chain output {
        type filter hook output priority filter + 10; policy accept;
    }
}
EOF

echo "Rules loaded."
nft list ruleset

Adapt the port list

Update tcp dport { ... } to match the ports your server actually listens on.

After loading, verify the rule ordering with nft list ruleset. You should see Tailscale's table ip filter chains at priority filter (0) and your table inet filter at priority filter + 10.
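To inspect the two tables individually:

# Our table — should report priority filter + 10
sudo nft list table inet filter

# Tailscale's IPv4 table — untouched by the script
sudo nft list table ip filter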

Persistence via a Systemd Service

Rather than saving with nft list ruleset > /etc/nftables.conf (which would also capture Tailscale's chains and cause conflicts on reboot), run the script as a oneshot service that starts after Tailscale:

Install the script

Install nftables setup script
sudo cp nftables-local.sh /etc/nftables-local.sh
sudo chmod 750 /etc/nftables-local.sh

Create the service unit

/etc/systemd/system/nftables-local.service
[Unit]
Description=Local nftables ruleset (inet filter)
After=network-pre.target tailscaled.service
Wants=tailscaled.service

[Service]
Type=oneshot
ExecStart=/etc/nftables-local.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Enable and test

Enable the service
sudo systemctl daemon-reload
sudo systemctl enable nftables-local.service
sudo systemctl start nftables-local.service
sudo nft list ruleset

Rule ordering across reboots

On boot, nftables-local.service runs after tailscaled.service, so ordering is always correct. If Tailscale restarts at runtime, it rewrites only its own tables without touching your inet filter.
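To verify this at runtime (restarting Tailscale causes a brief tunnel interruption):

# Restart Tailscale, then confirm its tables and ours coexist
sudo systemctl restart tailscaled
sudo nft list tables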