terraform-hcloud-kube-hetzner
Optimized and Maintenance-free Kubernetes on Hetzner Cloud in one command!
It's not even close to the major public cloud providers, but this is my setup:
* https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne... (Terraform, Kubernetes bootstrap)
* Flux for CD (GitOps)
* nginx-ingress + Hetzner Loadbalancer (thanks to https://github.com/hetznercloud/hcloud-cloud-controller-mana...)
* Hetzner storage volumes (thanks to https://github.com/hetznercloud/csi-driver)
Kube-Hetzner supports Hetzner Cloud load balancers and volumes out of the box, and it supports other components as well.
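For a concrete picture, most of the Hetzner load balancer integration boils down to annotations on a Service of type LoadBalancer, which hcloud-cloud-controller-manager picks up and turns into an actual LB. A minimal hypothetical sketch (the names and the nbg1 location are placeholders, not taken from the module):

    # Hypothetical Service manifest: hcloud-cloud-controller-manager watches
    # Services of type LoadBalancer and provisions a Hetzner LB for them.
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller                      # placeholder name
      annotations:
        load-balancer.hetzner.cloud/location: nbg1        # assumed LB location
        load-balancer.hetzner.cloud/use-private-ip: "true"
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx             # assumed ingress labels
      ports:
        - name: https
          port: 443
          targetPort: 443

In the Kube-Hetzner module itself this wiring is handled for you; the sketch just shows what the controller reacts to.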
-
I use Hetzner, Contabo, Time4VPS and other platforms in pretty much the same way (as IaaS VPS providers on top of which I run software, as opposed to SaaS/PaaS), but here's a quick glance at how I do things.
> deploy from source repo? Terraform?
Personally, I use Gitea for my repos and Drone CI for CI/CD.
Gitea: https://gitea.io/en-us/
Drone CI: https://www.drone.io/
Some might prefer Woodpecker due to licensing: https://woodpecker-ci.org/ but honestly most solutions out there are okay, even Jenkins.
Then I have some sort of a container cluster on the servers, so I can easily deploy things: I still like Docker Swarm (projects like CapRover might be nice to look at as well), though many might enjoy the likes of K3s or K0s more (lightweight Kubernetes clusters).
Docker Swarm: https://docs.docker.com/engine/swarm/ (uses the Compose spec for manifests; see the sketch after these links)
K3s: https://k3s.io/
K0s: https://k0sproject.io/ though MicroK8s and others are also okay.
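Since Swarm consumes the Compose spec, a stack manifest is just a compose file with a deploy section. A minimal sketch (the image name and mount path are placeholders), which also shows the bind-mount approach mentioned under backups below:

    # Deployed with: docker stack deploy -c stack.yml myapp
    version: "3.8"
    services:
      app:
        image: registry.example.com/myapp:1.0     # placeholder image
        ports:
          - "8080:8080"
        volumes:
          - /srv/myapp/data:/data                 # bind mount keeps data on the host
        deploy:
          replicas: 2
          restart_policy:
            condition: on-failure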
I also like having something like Portainer as a GUI for managing the clusters: https://www.portainer.io/ For Kubernetes, Rancher might offer more features, but it also has a higher footprint.
It even supports webhooks, so I can do a POST request at the end of a CI run and the cluster will automatically pull and launch the latest tagged version of my apps: https://docs.portainer.io/user/docker/services/webhooks
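As a rough sketch, the tail end of a Drone pipeline that pokes such a webhook might look like this (the Portainer URL and token are placeholders; Portainer generates the real webhook URL per service):

    # Final .drone.yml step: POST to the Portainer service webhook so the
    # cluster pulls and redeploys the freshly pushed image.
    - name: redeploy
      image: curlimages/curl
      commands:
        - curl -fsS -X POST https://portainer.example.com/api/webhooks/REPLACE-WITH-TOKEN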
> keep software up to date? ex: Postgres, OS
I build my own base container images and rebuild them (with recent package versions) on a regular basis, which is automatically scheduled: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...
Drone CI cron jobs make it easy to have this happen in the background: https://docs.drone.io/cron/ It works as long as I don't update across major versions, and as long as Maven doesn't decide to release a new version and remove the old .tar.gz archives from their downloads site for some reason, breaking my builds and making me update the URL.
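The cron itself gets defined through the Drone UI or CLI; the pipeline then opts into it with a trigger block, roughly like this (the cron name nightly is an assumption):

    # .drone.yml: run this rebuild pipeline only for the scheduled cron event
    kind: pipeline
    type: docker
    name: rebuild-base-images
    trigger:
      event:
        - cron
      cron:
        - nightly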
Some images, like databases, I just proxy through my Nexus instance; version upgrades are relatively painless most of the time, at least as long as I've set up the persistent data directories correctly.
> do load balancing? built-in load balancer?
This is a bit trickier. I use Apache2 with mod_md to get Let's Encrypt certificates, and Docker Swarm networking to direct the incoming traffic across the services: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-...
Some might prefer Caddy, another great web server with automatic HTTPS: https://caddyserver.com/ but the Apache modules do pretty much everything I need, and the performance has never actually been too bad for my needs. Up until now the applications themselves have always been the bottleneck; I'm actually working on a blog post comparing some web servers under real-world circumstances.
However, making things a bit more failure resilient might involve just paying Hetzner (in this case) for a load balancer: https://www.hetzner.com/cloud/load-balancer which will make everything less painful once you need to scale.
Why? Because doing round-robin DNS with the ACME certificate directory accessible and synchronized across multiple servers is a nuisance, although servers like Caddy attempt to make this work: https://caddyserver.com/docs/automatic-https#storage You could also get DNS-01 challenges working, but that needs even more work and integration for setting up TXT records. And even if you have multiple servers for resiliency, not all clients will try all of the IP addresses if one of the servers is down, although browsers should: https://webmasters.stackexchange.com/a/12704
So if you care about HTTPS certificates and want to do it yourself with multiple servers behind the same hostname, you'll either need to get DNS-01 working, do some messing around with shared directories (which may or may not actually work), or just get a regular commercial cert that you manually propagate to all of the web servers.
From there on out it should be a regular reverse proxy setup; in my case Docker Swarm takes care of the service discovery (hostnames that I can access).
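The service-discovery part is mostly attachable overlay networks: the reverse proxy joins the same network as the app stacks and can then reach them by service name. A rough sketch (network and image names are placeholders):

    # Created once with: docker network create --driver overlay --attachable proxy-net
    version: "3.8"
    services:
      app:
        image: registry.example.com/myapp:1.0   # placeholder image
        networks:
          - proxy-net   # the proxy resolves this service by its name, "app"
    networks:
      proxy-net:
        external: true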
> handle scaling? Terraform?
None; I manually provision however many nodes I need, mostly because I'm too broke to hand over my wallet to automation.
They have an API that you or someone else could probably hook up: https://docs.hetzner.cloud/
> automate backups? ex: databases, storage. Do you use provided backups and snapshots?
I use bind mounts for all of my containers for persistent storage, so the data is accessible on the host directly.
Then I use something like BackupPC to connect to those servers (SSH/rsync) and pull data to my own backup node, which then compresses and deduplicates the data: https://backuppc.github.io/backuppc/
It was a pain to set up, but it works really well and has saved my hide dozens of times. Some might enjoy Bacula more: https://www.bacula.org/
> maintain security? built-in firewall and DDoS protection?
I personally use Apache2 with ModSecurity and the OWASP Core Rule Set to act as a lightweight WAF: https://owasp.org/www-project-modsecurity-core-rule-set/
You might want to just cave in and go with Cloudflare for the most part, though: https://www.cloudflare.com/waf/
-
honey-swarm
Set up a full-fledged Portainer + Traefik swarm cluster with Ansible playbooks and a few VPSs
I've been using Docker Swarm + Traefik + Portainer and I'm quite happy. I orchestrate everything with Ansible [1]. The only manual process I have is provisioning the servers / load balancers.
It strikes a super nice balance between going fully manual on a VPS and drinking all of the Kubernetes Kool-Aid.
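For a flavour of what the Ansible side of such a setup can look like, here's a minimal hypothetical sketch using the community.docker collection (inventory group names and addresses are placeholders, not taken from the linked playbooks):

    # playbook.yml: initialize a manager, then join the workers
    - hosts: manager                 # placeholder inventory group
      tasks:
        - name: Initialize Docker Swarm
          community.docker.docker_swarm:
            state: present
          register: swarm_info

    - hosts: workers                 # placeholder inventory group
      tasks:
        - name: Join the swarm as a worker
          community.docker.docker_swarm:
            state: join
            join_token: "{{ hostvars[groups['manager'][0]].swarm_info.swarm_facts.JoinTokens.Worker }}"
            remote_addrs:
              - "{{ hostvars[groups['manager'][0]].ansible_default_ipv4.address }}:2377"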
-
swarmsible
Ansible-based tooling and production-grade example Docker stacks, updated with new learnings from running Docker Swarm in production
We use Docker Swarm for our deployments, so I will answer the questions based on that.
We have built some tooling around setting up and maintaining the swarm using Ansible [0]. We also added some Hetzner flavour to that [1], which allows us to automatically spin up completely new clusters in a really short amount of time.
deploy from source repo:
- We use Azure DevOps pipelines that automate deployments based on environment configs living in an encrypted state in Git repos. We use [2] and [3] to make it easier to organize the deployments using `docker stack deploy` under the hood.
keep software up to date:
- We are currently looking into CVE scanners that export to Prometheus, to give us an idea of what we should update
load balancing:
- depending on the project, Hetzner LB or Cloudflare
handle scaling:
- manually, but I would love to build some autoscaler for Swarm that interacts with our tooling [0] and [1]
automate backups:
- Docker Swarm cron jobs, either via jobs with a restart condition and a delay (see the sketch after this list) or [4]
maintain security:
- the Hetzner LB is front-facing. Communication happens over encrypted networks inside Hetzner's private cloud networks
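The restart-condition trick from the backups answer, as a rough compose sketch (image and command are placeholders): the service runs its job, exits, and Swarm restarts it after the delay, which amounts to a poor man's cron.

    version: "3.8"
    services:
      db-backup:
        image: registry.example.com/backup:latest   # placeholder image
        command: ["/usr/local/bin/run-backup.sh"]   # placeholder job script
        deploy:
          replicas: 1
          restart_policy:
            condition: any   # restart even after a clean exit...
            delay: 24h       # ...but only after this delay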
-
swarmsible-hetzner
Companion repository for https://github.com/neuroforgede/swarmsible with a focus on usage in the Hetzner cloud
-
We use https://github.com/costela/docker-volume-hetzner, which is really stable.
CSI support for Swarm is in beta as well and already merged into the Hetzner CSI driver (https://github.com/hetznercloud/csi-driver/tree/main/deploy/...). There are some rough edges at the moment with Docker + CSI, so I would stick with docker-volume-hetzner for now for production usage.
Disclaimer: I contributed to both repos.
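For reference, a compose-level sketch of using the plugin (assuming it was installed under the alias hetzner-volume and that size is given in GB; check the repo's README for the actual option names):

    version: "3.8"
    services:
      db:
        image: postgres:15
        volumes:
          - pgdata:/var/lib/postgresql/data
    volumes:
      pgdata:
        driver: hetzner-volume   # assumption: plugin alias chosen at install time
        driver_opts:
          size: "20"             # assumption: Hetzner volume size in GB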