| | kubernetes-replicator | docker-compose-stack |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 805 | 7 |
| Growth | 1.9% | - |
| Activity | 6.2 | 3.5 |
| Latest commit | 18 days ago | about 1 month ago |
| Language | Go | Shell |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kubernetes-replicator
-
What if your Pods need to trust self-signed certificates?
I've built a small MutatingAdmissionWebhook controller [0] that handles this via a pod annotation whose value names a secret containing a `ca.crt`. It configures the various TLS libraries through the (mostly) de facto standard OpenSSL environment variables, so it works off the shelf with pretty much everything I've tried it with.
I build a bundle (though I may just move to trust-manager [1]) and replicate it into all namespaces with kubernetes-replicator [2], and then I can annotate any pod with
[0] https://github.com/microcumulus/ca-injector
[1] https://github.com/cert-manager/trust-manager
[2] https://github.com/mittwald/kubernetes-replicator
-
To anyone hosting in Kubernetes: Do you put all of your apps in one namespace (e.g., default), or one app per namespace?
Whichever way you go, I’ve successfully used this to replicate secrets: https://github.com/mittwald/kubernetes-replicator
- GitHub - mittwald/kubernetes-replicator: Kubernetes controller for synchronizing secrets & config maps across namespaces
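Both comments above lean on the same annotation-driven flow. As a minimal sketch, a push-based replication manifest might look like this (the `replicate-to` annotation comes from the kubernetes-replicator README; the secret and namespace names here are made up for illustration):

```yaml
# Source secret: kubernetes-replicator pushes copies of it into matching
# namespaces. "*" matches all namespaces.
apiVersion: v1
kind: Secret
metadata:
  name: ca-bundle          # illustrative name
  namespace: cert-bundles  # illustrative namespace
  annotations:
    replicator.v1.mittwald.de/replicate-to: "*"
data:
  ca.crt: LS0tLS1CRUdJTi...  # base64-encoded PEM bundle (truncated)
```

Once the controller is running, the replicated copies in each namespace are kept in sync with this source secret, which is what makes the cluster-wide CA bundle and cross-namespace secret use cases above work.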
docker-compose-stack
-
What if your Pods need to trust self-signed certificates?
You're right, it's a little weird. I wrote a short essay about my setup[1] but the tl;dr is that I wanted certificates distributed in the same way every other thing on my machines is distributed.
I wanted my homeprod setup to be as hands-off as possible while still allowing easy management. Each physical host runs Alpine. During provisioning I install Docker and Tailscale and manually start a "root" container that runs[2] docker compose and then starts a cron daemon.

The compose commands include one or more "stack" files and are generated from a YAML file listing the stacks for each host. Watchtower runs with a 30-second cycle time to keep everything updated, including the root container itself.

Adding or updating services means committing and pushing a change to the root container repo; CI then builds and pushes a new image. Watchtower picks up the new image and restarts the root container, which re-runs Compose, which in turn starts, stops, or modifies anything that's changed.
For certificates, I tried a number of different things but ultimately settled on the method I described earlier. The purpose of the container image is to 1) transport the certificates and install them in the right spot and 2) be updatable automatically with Watchtower.
Certificate changes are very similar to the root container, except the git repo self-modifies upon renewals (yes I keep private keys committed to git, it's a homelab, it's really not a big deal).
[1]: https://www.petekeen.net/homeprod-management-with-docker
[2]: https://github.com/peterkeen/docker-compose-stack/blob/main/...
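The stack-list-to-compose step described above can be sketched roughly like this. This is my own reconstruction, not the actual repo code: it assumes a plain text file with one stack name per line, and the `/stacks` path and file names are illustrative.

```shell
# Hypothetical helper: turn a per-host stack list into the docker compose
# command the "root" container would run. File layout and names are assumed.
build_compose_cmd() {
  stacks_file="$1"   # one stack name per line
  args=""
  while IFS= read -r stack; do
    # Each stack name maps to a compose file; skip blank lines.
    [ -n "$stack" ] && args="$args -f /stacks/$stack.yml"
  done < "$stacks_file"
  # --remove-orphans stops services whose stack was removed from the list.
  printf 'docker compose%s up -d --remove-orphans\n' "$args"
}
```

In the setup described, a command like this would be re-run every time Watchtower restarts the root container with a new image, converging the running services to whatever the committed stack list declares.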
-
Old School Linux Administration (My Next Homelab Generation)
I really like this article just for the straightforwardness of the setup. Pets not cattle should be the home server mantra.
My setup is not quite as simple. I have one homeprod server running Proxmox with a number of single task VMs and LXCs. A task, for my purposes, is a set of one or more related services. So I have an internal proxy VM that also runs my dashboard. I have a media VM that runs the *arrs. I have an LXC that runs Jellyfin (GPU pass through is easier with LXC). A VM running Home Assistant OS. Etcetera.
Most of these VMs are running Docker on top of Alpine and a silly container management scheme I've cooked up[1]. I've found this setup really easy to wrap my head around, vs docker swarm or k8s or what have you. I'm even in the process of stripping dokku out of my stack in favor of this setup.
[1]: https://github.com/peterkeen/docker-compose-stack
- docker-compose-stack: a fun little zero-ish dependency docker compose continuous deployment tool
- What is everyone using to deploy Docker?
What are some alternatives?
KubernetesCRDOperator - A sample about Kubernetes controller which can work with CRD to implement Operator pattern.
CasaOS - CasaOS - A simple, easy-to-use, elegant open-source Personal Cloud system.
aws-cloud-map-mcs-controller-for-k8s - K8s controller implementing Multi-Cluster Services API based on AWS Cloud Map.
trust-manager - trust-manager is an operator for distributing trust bundles across a Kubernetes cluster.
secrets-manager - A daemon to sync Vault secrets to Kubernetes secrets
docker_installs - Docker and Docker-Compose install scripts for various linux distros and versions
kubed - 🛡️ Kubernetes Config Syncer (previously kubed) [Moved to: https://github.com/kubeops/config-syncer]
ca-injector - Painlessly use off-the-shelf images (and your own) in your k8s cluster, with custom root CAs.
config-syncer - 🛡️ Kubernetes Config Syncer (previously kubed)
sealed-secrets - A Kubernetes controller and tool for one-way encrypted Secrets
k8tz - Kubernetes admission controller and a CLI tool to inject timezones into Pods and CronJobs
kube-httpcache - Varnish Reverse Proxy on Kubernetes