What if your Pods need to trust self-signed certificates?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • trust-manager

    trust-manager is an operator for distributing trust bundles across a Kubernetes cluster.

  • Plug (but it's open source and free!): We've been trying to address this in Kubernetes with trust-manager. [1] Trust bundles need to be a runtime concern, and they need to support trusting both the old and new versions of a cert to safely allow for rotation. It's pretty simple, but it seems to work well!

    trust-manager also supports pulling in the Mozilla trust bundle which most Linux distros (and therefore most containers) use!

    Handling trust of private [2] certificates is generally done poorly across many orgs and platforms, not just Kubernetes. There are lots of ways of shooting yourself in the foot - particularly when it comes to rotating CA certificates. I think there's a lot of space for new solutions here!

    [1] https://cert-manager.io/docs/projects/trust-manager/

    [2] I try to avoid "self-signed" in this use case because its literal meaning is that the certificate signs itself using its own key, which is what root certificates do. The Let's Encrypt ISRG Root X1 certificate is self-signed, but it's definitely not what I'd call a 'private CA'; see https://letsencrypt.org/certificates/
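
    As a concrete illustration, a trust-manager Bundle resource can combine the default (Mozilla-derived) CA package with a private CA and publish the merged bundle for Pods to mount. This is a minimal sketch based on the trust-manager docs; the Secret name, target key, and label are placeholders:

      apiVersion: trust.cert-manager.io/v1alpha1
      kind: Bundle
      metadata:
        name: example-bundle
      spec:
        sources:
          # Include the default CA package (derived from the Mozilla bundle)
          - useDefaultCAs: true
          # Append a private CA stored in a Secret in the trust namespace
          - secret:
              name: my-private-ca       # placeholder
              key: ca.crt
        target:
          # Write the merged PEM bundle into a ConfigMap in matching namespaces
          configMap:
            key: trust-bundle.pem       # placeholder
          namespaceSelector:
            matchLabels:
              trust: enabled            # placeholder label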

  • ca-injector

    Painlessly use off-the-shelf images (and your own) in your k8s cluster, with custom root CAs.

  • I've built a small MutatingAdmissionWebhook controller [0] that handles this via a pod annotation whose value is a secret with `ca.crt` inside. It uses the (mostly) de facto standard OpenSSL environment variables to configure the libraries, so it works across pretty much everything I've tried it with off the shelf.

    I build a bundle (though I may just move to trust-manager [1]) and replicate it into all namespaces with kubernetes-replicator [2], and then I can annotate any pod with

    [0] https://github.com/microcumulus/ca-injector

    [1] https://github.com/cert-manager/trust-manager

    [2] https://github.com/mittwald/kubernetes-replicator
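
    The effect of such a mutation is roughly the following. This is a hypothetical sketch, not ca-injector's exact output: the annotation key, mount path, and variable set are assumptions, while SSL_CERT_FILE is one of the OpenSSL-style variables mentioned above:

      apiVersion: v1
      kind: Pod
      metadata:
        name: app
        annotations:
          # Hypothetical annotation key; the real ca-injector key may differ.
          ca-injector.example/inject: my-ca-secret
      spec:
        containers:
          - name: app
            image: nginx   # any off-the-shelf image
            env:
              # De facto standard OpenSSL variable, also honored by Go, Ruby, etc.
              - name: SSL_CERT_FILE
                value: /etc/injected-ca/ca.crt
            volumeMounts:
              - name: injected-ca
                mountPath: /etc/injected-ca
                readOnly: true
        volumes:
          - name: injected-ca
            secret:
              secretName: my-ca-secret   # the Secret named in the annotation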

  • kubernetes-replicator

    Kubernetes controller for synchronizing secrets & config maps across namespaces
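
    For example, pull-based replication lets a Secret in one namespace be mirrored into others. Annotation names below follow the kubernetes-replicator README; the namespaces and names are placeholders, so verify against the version in use:

      # Source Secret: allow replication into any namespace
      apiVersion: v1
      kind: Secret
      metadata:
        name: trust-bundle
        namespace: certs              # placeholder namespace
        annotations:
          replicator.v1.mittwald.de/replication-allowed: "true"
          replicator.v1.mittwald.de/replication-allowed-namespaces: ".*"
      ---
      # Destination Secret: pulls its data from the source above
      apiVersion: v1
      kind: Secret
      metadata:
        name: trust-bundle
        namespace: some-app           # placeholder namespace
        annotations:
          replicator.v1.mittwald.de/replicate-from: certs/trust-bundle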

  • docker-compose-stack

    Use docker-compose and watchtower to self-deploy and auto-update a stack

  • You're right, it's a little weird. I wrote a short essay about my setup[1] but the tl;dr is that I wanted certificates distributed in the same way every other thing on my machines is distributed.

    I wanted my homeprod setup to be as hands-off as possible while still allowing easy management. Each physical host runs Alpine. During provisioning I install Docker and Tailscale, and manually start a "root" container that runs[2] docker compose and then starts a cron daemon. The compose commands include one or more "stack" files and are generated from a yaml file listing the stacks for each host.

    Watchtower runs with a 30-second cycle time to keep everything updated, including the root container. Adding or updating services means committing and pushing a change to the root container repo, then CI builds and pushes a new image. Watchtower picks up the new image and restarts the root container, which re-runs Compose, which in turn starts, stops, or modifies anything that's changed.

    For certificates, I tried a number of different things but ultimately settled on the method I described earlier. The purpose of the container image is to 1) transport the certificates and install them in the right spot and 2) be updatable automatically with Watchtower.

    Certificate changes are very similar to the root container, except the git repo self-modifies upon renewals (yes I keep private keys committed to git, it's a homelab, it's really not a big deal).

    [1]: https://www.petekeen.net/homeprod-management-with-docker

    [2]: https://github.com/peterkeen/docker-compose-stack/blob/main/...
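
    A minimal sketch of what one of those stack files could look like, assuming the stock containrrr/watchtower image and its --interval flag; the app service and registry are placeholders:

      # docker-compose.yml (one "stack" file)
      services:
        watchtower:
          image: containrrr/watchtower
          command: --interval 30   # 30-second update cycle, as described above
          volumes:
            # Watchtower needs the Docker socket to inspect and restart containers
            - /var/run/docker.sock:/var/run/docker.sock
        app:
          image: registry.example.com/homeprod/app:latest   # placeholder service
          restart: unless-stopped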

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
