Can any Hetzner user please explain their workflow on Hetzner?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • terraform-hcloud-kube-hetzner

    Optimized and Maintenance-free Kubernetes on Hetzner Cloud in one command!

  • It's not even close to major public cloud providers, but this is my setup:

    * https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne... (Terraform, Kubernetes bootstrap)

    * Flux for CI

    * nginx-ingress + Hetzner Loadbalancer (thanks to https://github.com/hetznercloud/hcloud-cloud-controller-mana...)

    * Hetzner storage volumes (thanks to https://github.com/hetznercloud/csi-driver)

    Kube-Hetzner supports Hetzner Cloud load balancers and volumes out of the box, and it supports other components as well.

  • Caddy

    Fast and extensible multi-platform HTTP/1-2-3 web server with automatic HTTPS

  • I use Hetzner, Contabo, Time4VPS and other platforms in pretty much the same way (as IaaS VPS providers on top of which I run software, as opposed to SaaS/PaaS), but here's a quick glance at how I do things.

    > deploy from source repo? Terraform?

    Personally, I use Gitea for my repos and Drone CI for CI/CD.

    Gitea: https://gitea.io/en-us/

    Drone CI: https://www.drone.io/

    Some might prefer Woodpecker due to licensing: https://woodpecker-ci.org/ but honestly most solutions out there are okay, even Jenkins.

    Then I have some sort of a container cluster on the servers, so I can easily deploy things: I still like Docker Swarm (projects like CapRover might be nice to look at as well), though many might enjoy the likes of K3s or K0s more (lightweight Kubernetes clusters).

    Docker Swarm: https://docs.docker.com/engine/swarm/ (uses the Compose spec for manifests)

    K3s: https://k3s.io/

    K0s: https://k0sproject.io/ though MicroK8s and others are also okay.

    I also like having something like Portainer as a GUI for managing the clusters: https://www.portainer.io/ For Kubernetes, Rancher might offer more features, but it will have a higher footprint.

    It even supports webhooks, so I can do a POST request at the end of a CI run and the cluster will automatically pull and launch the latest tagged version of my apps: https://docs.portainer.io/user/docker/services/webhooks
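
    As a sketch of that last step: the final stage of a CI pipeline can fire the webhook with a plain empty-body POST. The URL below is a made-up placeholder; Portainer generates the real token per service and shows it in the service's webhook settings.

```python
import urllib.request

# Placeholder webhook URL -- Portainer generates the real token per service.
WEBHOOK_URL = "https://portainer.example.com/api/webhooks/REPLACE-WITH-TOKEN"

def build_trigger(url: str) -> urllib.request.Request:
    """Build the empty-body POST that tells Portainer to re-pull and redeploy."""
    return urllib.request.Request(url, data=b"", method="POST")

def trigger_redeploy(url: str = WEBHOOK_URL) -> int:
    """Fire the webhook; returns the HTTP status code."""
    with urllib.request.urlopen(build_trigger(url)) as resp:
        return resp.status
```

    The same request could just as well be a curl step in the CI config; the only contract is "POST to the webhook URL".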

    > keep software up to date? ex: Postgres, OS

    I build my own base container images and rebuild them (with recent package versions) on a regular basis, which is automatically scheduled: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...

    Drone CI makes it easy to have this happen in the background: https://docs.drone.io/cron/ That works well, as long as I don't update across major versions, or until Maven decides to release a new version and remove the old .tar.gz archives from the downloads site for some reason, breaking my builds and making me update the URL.

    Some images, like databases, I just proxy through my Nexus instance; version upgrades are relatively painless most of the time, at least as long as I've set up the persistent data directories correctly.

    > do load balancing? built-in load balancer?

    This is a bit more tricky. I use Apache2 with mod_md to get Let's Encrypt certificates and Docker Swarm networking for directing the incoming traffic across the services: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-...

    Some might prefer Caddy, which is another great web server with automatic HTTPS: https://caddyserver.com/ but the Apache modules do pretty much everything I need, and the performance has never actually been too bad for my needs. Up until now, the applications themselves have always been the bottleneck; I'm actually working on a blog post comparing some web servers in real-world circumstances.

    However, making things a bit more failure resilient might involve just paying Hetzner (in this case) to give you a load balancer: https://www.hetzner.com/cloud/load-balancer which will make everything less painful once you need to scale.

    Why? Because doing round-robin DNS with the ACME certificate directory accessible and synchronized across multiple servers is a nuisance, although servers like Caddy attempt to make this work: https://caddyserver.com/docs/automatic-https#storage You could also get DNS-01 challenges working, but that needs even more work and integration for setting up TXT records. Even if you have multiple servers for resiliency, not all clients will try all of the IP addresses when one of the servers is down, although browsers should: https://webmasters.stackexchange.com/a/12704

    So if you care about HTTPS certificates and want to do it yourself with multiple servers having the same hostname, you'll either need to get DNS-01 working, do some messing around with shared directories (which may or may not actually work), or will just need to get a regular commercial cert that you'd manually propagate to all of the web servers.

    From there on out it should be a regular reverse proxy setup; in my case, Docker Swarm takes care of the service discovery (hostnames that I can access).

    > handle scaling? Terraform?

    None, I manually provision how many nodes I need, mostly because I'm too broke to hand over my wallet to automation.

    They have an API that you or someone else could probably hook up: https://docs.hetzner.cloud/
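
    For illustration, a minimal sketch of what hooking up that API could look like, using only the standard library. The endpoint shape follows the documented Hetzner Cloud API (POST /v1/servers with a Bearer token); the server type and image names are just example values, so check the docs for what's currently offered.

```python
import json
import urllib.request

API_BASE = "https://api.hetzner.cloud/v1"

def build_create_server(token: str, name: str,
                        server_type: str = "cx22",
                        image: str = "ubuntu-24.04") -> urllib.request.Request:
    """Build the POST /servers request described at docs.hetzner.cloud."""
    body = json.dumps({"name": name, "server_type": server_type,
                       "image": image}).encode()
    return urllib.request.Request(
        f"{API_BASE}/servers",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def create_server(token: str, name: str) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_create_server(token, name)) as resp:
        return json.load(resp)
```

    A homegrown autoscaler would essentially be this plus a metric check and the matching DELETE call.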

    > automate backups? ex: databases, storage. Do you use provided backups and snapshots?

    I use bind mounts for all of my containers for persistent storage, so the data is accessible on the host directly.

    Then I use something like BackupPC to connect to those servers (SSH/rsync) and pull data to my own backup node, which then compresses and deduplicates the data: https://backuppc.github.io/backuppc/
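
    BackupPC drives rsync over SSH itself, but the pull model it implements boils down to something like this rough sketch (hostnames and paths below are made up for illustration):

```python
import subprocess

def rsync_pull_cmd(host: str, src: str, dest: str) -> list[str]:
    """Assemble the rsync-over-SSH pull a backup node would run against a server."""
    # -a: preserve permissions/times, -z: compress in transit,
    # --delete: mirror deletions so the copy matches the source.
    return ["rsync", "-az", "--delete", f"{host}:{src}", dest]

def pull_backup(host: str, src: str, dest: str) -> None:
    """Run the pull; raises CalledProcessError if rsync fails."""
    subprocess.run(rsync_pull_cmd(host, src, dest), check=True)
```

    The key property is that the backup node initiates the connection, so a compromised application server cannot reach into the backups.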

    It was a pain to set up, but it works really well and has saved my hide dozens of times. Some might enjoy Bacula more: https://www.bacula.org/

    > maintain security? built-in firewall and DDoS protection?

    I personally use Apache2 with ModSecurity and the OWASP ruleset to act as a lightweight WAF: https://owasp.org/www-project-modsecurity-core-rule-set/

    You might want to just cave in and go with Cloudflare for the most part, though: https://www.cloudflare.com/waf/

  • csi-driver

    Kubernetes Container Storage Interface driver for Hetzner Cloud Volumes

  • awesome-hcloud

    A curated list of awesome libraries, tools, and integrations for Hetzner Cloud

  • Portainer

    Making Docker and Kubernetes management easy.

  • woodpecker

    Woodpecker is a simple yet powerful CI/CD engine with great extensibility.

  • k3s

    Lightweight Kubernetes

  • hcloud-cloud-controller-manager

    Kubernetes cloud-controller-manager for Hetzner Cloud

  • honey-swarm

    Set up a full-fledged Portainer + Traefik swarm cluster with Ansible playbooks and a few VPSes

  • I've been using docker swarm + traefik + portainer and I'm quite happy. I orchestrate everything with Ansible [1]. The only manual process I have is provisioning the servers / load balancers.

    It provides a super nice balance between going all-manual VPS and going all in on the Kubernetes Kool-Aid.

    [1] https://github.com/sergioisidoro/honey-swarm

  • swarmsible

    Ansible based Tooling and production grade example Docker Stacks. Updated with new learnings from running Docker Swarm in production

  • We use Docker Swarm for our deployments, so I will answer the questions based on that.

    We have built some tooling around setting up and maintaining the swarm using ansible [0]. We also added some Hetzner flavour to that [1] which allows us to automatically spin up completely new clusters in a really short amount of time.

    deploy from source repo:

    - We use Azure DevOps pipelines that automate deployments based on environment configs living in an encrypted state in Git repos. We use [2] and [3] to make it easier to organize the deployments using `docker stack deploy` under the hood.

    keep software up to date:

    - We are currently looking into CVE scanners that export into prometheus to give us an idea of what we should update

    load balancing:

    - depending on the project, Hetzner LB or Cloudflare

    handle scaling:

    - manually, but I would love to build some autoscaler for Swarm that interacts with our tooling [0] and [1]

    automate backups:

    - docker swarm cronjobs either via jobs with restart condition and a delay or [4]

    maintain security:

    - Hetzner LB is front facing. Communication is done via encrypted networks inside Hetzner private cloud networks

    - [0] https://github.com/neuroforgede/swarmsible

  • swarmsible-hetzner

    Companion repository for https://github.com/neuroforgede/swarmsible with a focus on usage in the Hetzner cloud

  • nothelm.py

    nothelm.py - opinionated docker stack project tool with templating support

  • docker-stack-deploy

    Utility to improve docker stack deploy

  • docker-volume-hetzner

    Docker Volume Plugin for accessing Hetzner Cloud Volumes

    We use https://github.com/costela/docker-volume-hetzner which is really stable.

    CSI support for Swarm is in beta as well and is already merged into the Hetzner CSI driver (https://github.com/hetznercloud/csi-driver/tree/main/deploy/...). There are some rough edges at the moment with Docker + CSI, so I would stick with docker-volume-hetzner for prod usage for now.

    Disclaimer: I contributed to both repos.

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.
