Sakura VPS¶
Role in My Infrastructure
The Sakura VPS is the public-facing entry point for all my self-hosted services. It runs HAProxy as a reverse proxy and connects to my home server via Tailscale, forwarding traffic through the encrypted VPN tunnel.
VPS Details¶
- Provider: Sakura Internet (さくらのVPS)
- Location: Tokyo, Japan
- OS: Ubuntu 24.04 LTS
- Role: Public reverse proxy (HAProxy) + Tailscale VPN gateway
Monitoring¶
The Sakura VPS sends an hourly Pulse ping to updown.io. If the ping stops arriving, updown.io triggers an SMS alert.
0 * * * * curl -sSo /dev/null -m 10 --retry 5 https://pulse.updown.io/<token>/<token>
SSH Access¶
sshd listens on port 28, IPv6 only (`[::]:28`). Two reasons:
- Bot noise reduction: automated scanners almost exclusively hammer port 22 on IPv4. An IPv6-only non-standard port receives virtually no unsolicited connection attempts.
- Tailscale fallback: if Tailscale goes down and the VPN tunnel is unreachable, direct SSH over IPv6 is still available as a recovery path.
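A minimal sshd drop-in matching this setup could look like the following (the drop-in filename is an assumption; the options are standard OpenSSH):

```
# /etc/ssh/sshd_config.d/port.conf (hypothetical drop-in)
Port 28
AddressFamily inet6   # accept IPv6 connections only
ListenAddress ::
```

`AddressFamily inet6` is what makes the daemon skip IPv4 entirely; without it, `Port 28` alone would still be scannable over IPv4.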
HAProxy Configuration¶
HAProxy is the only process listening on public-facing ports. All traffic is forwarded through Tailscale to Incus containers on the home server. Full configuration at Benoit/HAProxy.
Ports¶
| Port | Protocol | Listener | Forwards to |
|---|---|---|---|
| 22 | TCP | HAProxy (`listen ssh`) | `forgejo.incus:10022` (Forgejo git SSH) |
| 25 | TCP | HAProxy (`listen smtp`) | Mailcow SMTP |
| 80 | TCP | HAProxy `frontend_default` | HTTP → HTTPS redirect (except `retro.benoit.jp.net`) |
| 443 | TCP | HAProxy `frontend_default` | HTTPS, SSL termination with h2 + http/1.1 |
| 465 | TCP | HAProxy (`listen smtps`) | Mailcow SMTPS |
| 993 | TCP | HAProxy (`listen imaps`) | Mailcow IMAPS |
| 4190 | TCP | HAProxy (`listen sieve`) | Mailcow ManageSieve |
Web Backends¶
All HTTP/HTTPS traffic enters through a single `frontend_default`. Routing is done via `hdr(host)` ACLs:
| Hostname | Backend | Notes |
|---|---|---|
| `benoit.jp.net` | `www.incus:80` | Public |
| `forgejo.benoit.jp.net` | `forgejo.incus:3000` | Public |
| `mastodon.benoit.jp.net` | `mastodon2.incus:80` | Public, rate-limited |
| `retro.benoit.jp.net` | `retro.incus:80` | HTTP-only (no TLS, for old clients) |
| `navidrome.benoit.jp.net` | `navidrome.incus:4533` | JP only |
| `photoprism.benoit.jp.net` | `photoprism.incus:2342` | JP + FR only |
| `miniflux.benoit.jp.net` | `miniflux.incus:8080` | Tailscale / allowed IPs only |
| `kanboard.benoit.jp.net` | `kanboard.incus:80` | Tailscale / allowed IPs only |
| `vaultwarden.benoit.jp.net` | `vaultwarden.incus:80` | JP + allowed IPs only |
| `beszel.benoit.jp.net` | `beszel.incus:8090` | Tailscale / allowed IPs only |
| `jellyfin.benoit.jp.net` | `jellyfin.incus:8096` | Tailscale / allowed IPs only |
| `mail.benoit.jp.net` | `mailcow.incus:80` | Tailscale / allowed IPs only |
| `scrutiny.benoit.jp.net` | `scrutiny.incus:8080` | Tailscale / allowed IPs only |
| `uptime-kuma.benoit.jp.net` | `mxmon` (Tailscale peer) | Tailscale / allowed IPs only |
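As a sketch, the host-based routing could look like this (backend and ACL names are illustrative, not the actual config, which lives at Benoit/HAProxy):

```
frontend frontend_default
    bind :80
    bind :443 ssl crt /etc/haproxy/crt/ alpn h2,http/1.1

    # One ACL per hostname, one use_backend per service
    acl host_forgejo  hdr(host) -i forgejo.benoit.jp.net
    acl host_mastodon hdr(host) -i mastodon.benoit.jp.net
    use_backend bk_forgejo  if host_forgejo
    use_backend bk_mastodon if host_mastodon
    default_backend bk_www

backend bk_forgejo
    server forgejo forgejo.incus:3000

backend bk_www
    server www www.incus:80
```

The `.incus` hostnames resolve over the Tailscale tunnel to containers on the home server.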
Key Features¶
- Country-based ACLs: per-country IP blocklists loaded from flat files (`/etc/haproxy/country/*.txt`), used to restrict sensitive services to JP/FR or to block specific regions. The script that generates these files is at Benoit/Scripts.
- PROXY protocol: TCP listens (mail ports, Forgejo SSH) forward with `send-proxy` so backends see the real client IP.
- HSTS + security headers: all HTTPS backends set `Strict-Transport-Security` (1 year), `X-Frame-Options`, `X-Content-Type-Options`, and `Referrer-Policy`.
- HTTP caching + compression: the default frontend caches cacheable responses and compresses text content with deflate/gzip.
- www → non-www redirect: `http-request redirect` strips the `www.` prefix with a 301.
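Two of these features sketched as HAProxy config (backend names are illustrative; the country file path comes from the list above):

```
# Restrict a sensitive backend to Japanese source IPs and set HSTS
backend bk_vaultwarden
    acl src_jp src -f /etc/haproxy/country/jp.txt
    http-request deny if !src_jp
    http-response set-header Strict-Transport-Security "max-age=31536000"
    server vaultwarden vaultwarden.incus:80

# Forward SMTP with the PROXY protocol so Mailcow logs real client IPs
listen smtp
    bind :25
    server mailcow mailcow.incus:25 send-proxy
```

Note that `send-proxy` only works if the backend is configured to expect the PROXY protocol header; a backend that is not will reject the connection.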
TLS Certificates¶
Certificates are managed with Certbot using the standalone HTTP-01 challenge on a non-standard port, so HAProxy can keep listening on port 80.
Request a certificate
certbot certonly \
--standalone \
--non-interactive \
--agree-tos \
--email certbot@<your-domain> \ # (1)!
--http-01-port=8899 \
--domains <app>.benoit.jp.net # (2)!
1. Replace with your email address.
2. Replace `<app>` with the subdomain (e.g. `forgejo`, `mastodon`, `dawarich`).
HAProxy expects a single PEM file per domain containing both the certificate chain and the private key. The following script reads all Certbot certificates and assembles them into /etc/haproxy/crt/:
#!/bin/bash
# Directory where HAProxy expects certificate files
HAPROXY_CERT_DIR="/etc/haproxy/crt"
# Ensure the directory exists
mkdir -p "$HAPROXY_CERT_DIR"
# Get certbot certificates output
CERTBOT_OUTPUT=$(certbot certificates)
# Process each certificate block
echo "$CERTBOT_OUTPUT" | awk '
BEGIN { cert_name=""; fullchain=""; privkey=""; }
/Certificate Name:/ { cert_name=$3; }
/Certificate Path:/ { fullchain=$3; }
/Private Key Path:/ { privkey=$4;
if (cert_name && fullchain && privkey) {
print cert_name, fullchain, privkey;
cert_name=""; fullchain=""; privkey="";
}
}
' | while read -r cert_name fullchain privkey; do
if [[ -n "$cert_name" && -n "$fullchain" && -n "$privkey" ]]; then
output_cert="$HAPROXY_CERT_DIR/$cert_name.pem"
cat "$fullchain" "$privkey" > "$output_cert"
chmod 600 "$output_cert"
echo "Created $output_cert"
fi
done
Install and run the script
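Saved as e.g. `haproxy_cert.sh`, installing and running it once could look like this (the install path is an assumption):

```shell
# Install to a directory on root's PATH (path is an assumption)
sudo install -m 0755 haproxy_cert.sh /usr/local/sbin/haproxy_cert.sh
# Build the initial PEM bundles in /etc/haproxy/crt/
sudo /usr/local/sbin/haproxy_cert.sh
```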
Automatic renewal
Certbot's systemd timer handles renewals automatically. Add haproxy_cert.sh && systemctl reload haproxy as a Certbot deploy hook to rebuild the PEM files and reload HAProxy after each renewal.
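A deploy hook along these lines would do it; Certbot runs every executable in `/etc/letsencrypt/renewal-hooks/deploy/` after each successful renewal (the hook filename and the script path are assumptions):

```shell
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/haproxy.sh (hypothetical path)
# Rebuild the combined PEM files, then reload HAProxy only if that succeeded
/usr/local/sbin/haproxy_cert.sh && systemctl reload haproxy
```

Make the hook executable (`chmod 755`), otherwise Certbot silently skips it.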
Firewall Setup¶
Why Not ufw?¶
Ubuntu ships ufw as its default firewall tool, but it uses legacy iptables under the hood. Tailscale, meanwhile, manages its own chains via iptables-nft, and mixing the two leads to unpredictable behavior. The clean solution is to write native nft rules directly.
Comparison of options on Ubuntu:
- ufw: default and easiest, but conflicts with native nftables rules from Tailscale, Docker, and WireGuard.
- nft directly: what Ubuntu's own docs recommend for granular control. Write rules in `/etc/nftables.conf` or a custom script.
- firewalld: uses nftables natively, but is primarily the Red Hat/Fedora world's tool.
Coexisting with Tailscale¶
Tailscale manages its own table ip filter and table ip6 filter chains via iptables-nft. Running nft flush ruleset would wipe those chains until Tailscale restarts.
The correct approach is to add a separate table inet filter at priority filter + 10. Tailscale's chains run at priority 0 (the filter constant), so they accept Tailscale traffic first, then your inet filter acts as the gate for everything else.
#!/bin/sh
# Adds inet filter table without touching Tailscale's ip/ip6 tables
nft flush table inet filter 2>/dev/null
nft add table inet filter 2>/dev/null || true
nft -f - <<'EOF'
table inet filter {
chain input {
type filter hook input priority filter + 10; policy drop;
iif lo accept
ct state established,related accept
ip protocol icmp accept
ip6 nexthdr ipv6-icmp accept
iif "tailscale0" accept
udp dport 41641 accept
tcp dport { 22, 25, 28, 80, 443, 465, 993, 4190 } accept
}
chain forward {
type filter hook forward priority filter + 10; policy drop;
iif "tailscale0" accept
oif "tailscale0" accept
}
chain output {
type filter hook output priority filter + 10; policy accept;
}
}
EOF
echo "Rules loaded."
nft list ruleset
Adapt the port list
Update tcp dport { ... } to match the ports your server actually listens on.
After loading, verify the rule ordering with nft list ruleset. You should see Tailscale's table ip filter chains at priority filter (0) and your table inet filter at priority filter + 10.
Persistent via Systemd Service¶
Rather than saving with nft list ruleset > /etc/nftables.conf (which would also capture Tailscale's chains and cause conflicts on reboot), run the script as a oneshot service that starts after Tailscale:
Install the script
Create the service unit
Enable and test
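The unit could be sketched as follows (unit and script names are assumptions consistent with the description below; the key parts are the `After=`/`BindsTo=` coupling to tailscaled and installing into tailscaled's wants so the rules reload with it):

```ini
# /etc/systemd/system/nftables-local.service (script path is an assumption)
[Unit]
Description=Load local nftables rules after Tailscale
After=tailscaled.service
BindsTo=tailscaled.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/nftables-local.sh

[Install]
WantedBy=tailscaled.service
```

Then `sudo systemctl daemon-reload && sudo systemctl enable --now nftables-local.service`, and test by restarting tailscaled and checking `nft list ruleset` afterwards.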
Rule ordering and lifecycle
On boot, nftables-local.service runs after tailscaled.service, so ordering is always correct. BindsTo= ensures the firewall rules are reloaded whenever tailscaled restarts, which is necessary because nftables resolves interface names to kernel indexes at load time. If tailscale0 is recreated with a new index, the old rules become stale. See Post Mortem: nftables Stale Interface Index for the full story.
systemd-networkd and Tailscale¶
systemd-networkd flushes routing policy rules it considers foreign on every restart. Tailscale installs its own ip rules at priorities 5210 to 5270 for policy routing via table 52, and loses them each time networkd restarts. Tailscale's recovery path logs the flush and re-installs the ip rules, but does not always re-sync the throw routes inside table 52, which can leave advertised subnet routes broken until tailscaled is fully restarted. See Post Mortem: Tailscale Subnet Route Lost After networkd Restart for the full story.
The fix is to tell networkd to stop managing foreign routing policy rules:
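Concretely, that is a single setting in `/etc/systemd/networkd.conf`:

```ini
# /etc/systemd/networkd.conf
[Network]
# Leave routing policy rules installed by other programs (Tailscale) alone
ManageForeignRoutingPolicyRules=no
```

Apply it with `sudo systemctl restart systemd-networkd` (this restart is safe: with the option set, the flush no longer happens).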
With this in place, any future networkd restart leaves Tailscale's ip rules and policy routing alone, regardless of what triggered it (package upgrades, interface changes, manual reload).
zram Swap¶
The VPS has 1 GB of RAM. To improve performance under memory pressure, a compressed in-memory swap device is set up via zram-tools. zram creates a block device backed by compressed RAM (lz4 in this setup), exposed as a swap device at higher priority than the regular swapfile.
Current swap layout:
| Device | Type | Size | Priority |
|---|---|---|---|
| `/dev/zram0` | zram (compressed RAM) | ~480 MB | 100 |
| `/swapfile` | file | 2 GB | 50 |
zram is used first (higher priority). The swapfile acts as a safety net for heavy memory pressure that exceeds the zram device capacity.
Configure
The configuration file controls the zram device size and compression algorithm:
# Fraction of RAM to use for zram (percent)
PERCENTAGE=50
# Compression algorithm: lz4 is a good balance of speed and ratio
ALGO=lz4
With 1 GB RAM, 50% gives ~480 MB of compressed swap. lz4 compresses fast and decompresses faster than zstd, which matters more than compression ratio for swap.
Disable zswap to avoid double compression
The kernel's zswap feature intercepts pages going to swap and compresses them in a pool. With zram already doing compression, zswap would compress data twice for no benefit. Disable it via kernel cmdline:
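On a GRUB-based Ubuntu install, that means adding `zswap.enabled=0` to the kernel command line:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... zswap.enabled=0"
```

Run `sudo update-grub` afterwards. The current state can be checked with `cat /sys/module/zswap/parameters/enabled` (`N` once disabled).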
Reboot after this change for it to take effect.
Set swapfile priority in fstab
By default Ubuntu activates the swapfile with pri=-2, which is lower than zram's pri=100, but it is better to set it explicitly so the order RAM → zram → swapfile is guaranteed:
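The corresponding `/etc/fstab` entry would be:

```
# /etc/fstab
/swapfile  none  swap  sw,pri=50  0  0
```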
Verify the swap layout after starting:
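For example with `swapon` (output depends on the machine, so none is shown here):

```shell
# List active swap devices with size, usage, and priority
swapon --show
# Memory summary including total swap
free -h
```

`/dev/zram0` should appear with priority 100 and `/swapfile` with priority 50.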