postgres-operator
cert-manager
| | postgres-operator | cert-manager |
|---|---|---|
| Mentions | 33 | 101 |
| Stars | 3,719 | 11,457 |
| Growth | 1.9% | 1.7% |
| Activity | 9.0 | 9.8 |
| Latest commit | 7 days ago | 1 day ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
postgres-operator
- No disk space crashloop but pod healthy · Issue #3788 · CrunchyData/postgres-operator
- Deploying Postgres on Kubernetes in production
- Anyone using cloudnativepg in production?
- Jolt v0.5.2 is available!
As for the operators, I've been using Crunchy PGO, which is very high quality and one of the most widely used. You can install it via Helm, or via OLM from OperatorHub. There are other good ones as well, but none that I have experience with. The only issue I've run into so far is that I've had to disable TLS on the database cluster, as Prowlarr refused to connect to it for some reason (Radarr was fine). I still need to open an issue with the Prowlarr team about that, but I might switch to a service mesh for TLS anyway.
- Can someone share experience configuring Highly Available PgSQL?
The Crunchy operator, seemingly like most (if not all) of the other Postgres operators (Zalando, KubeDB, and StackGres, etc.), is essentially a wrapper for Patroni. IMO if someone wanted a Patroni cluster, they would just build one. The point of an operator is to manage the cluster resources and node relationships, so why not have it take the role Patroni is filling here? It's already reaching into the nodes, obtaining status, managing the routing, etc., so why add the extra layer?
- Questions about Kubernetes
On the topic of Postgres, you should look into an operator or Helm chart that can set up common things (like replication and auto-failover), such as Crunchy's Postgres operator, or consider using a "cloud-native" distributed database like CockroachDB (disclaimer: I am a Cockroach Labs employee), which has its own operator as well. Another word of warning: running stateful services, particularly mission-critical databases, can require a lot of maintenance work (it's my full-time job), so unless this is for a hobby project, I would highly recommend you look into using a managed database offering. Every major cloud provider and most database companies have one.
- My girlfriend left me... I have a K8S cluster, argocd, longhorn, traefik, metallb, on 3 optiplex mff with proxmox... This is the start gentlemen, I'll post back in 1 year. This dashboard will be full my friends, I promise, see you in the rabbit hole o/
For Postgres you can also have a look at PGO or the Bitnami Helm chart.
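For reference, a minimal sketch of the Bitnami route (the release name and replica count are examples; `architecture=replication` is the chart value that enables streaming replication):

```shell
# Add the Bitnami chart repo and install PostgreSQL with a primary plus read replicas
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-postgres bitnami/postgresql \
  --set architecture=replication \
  --set readReplicas.replicaCount=2
```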
- Databases on Kubernetes are fundamentally the same as a database on a VM
Let's say a new Kubernetes version comes out in April. In November, as everything works perfectly well, you decide to install a Postgres operator on it. Bummer, it doesn't work. It's not a huge issue, you just wait until the bug is resolved (already done[0]), but it's one of those tiny things that I don't get when running Postgres natively. And I'm saying this as a big fan of Crunchy Data, running some production loads on it without a failure for quite some time now.
[0] https://github.com/CrunchyData/postgres-operator/issues/3476
- Are you running databases on Kubernetes?
There is one particular client that has a somewhat big database, 40–120 GB (it changes size over the year), and for that we used the CrunchyData Postgres operator ( https://access.crunchydata.com/documentation/postgres-operator/v5/ ). We have no commercial relation with them, but oh boy, let me tell you what a godsend that thing is. This database in particular processes massive data and is distributed between several nodes in a read-write and read-only set, and it is amazing how easy it is to move things around, take backups, increase capacity, and a bunch of other goodies that the operator brings. Give it a try.
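A read-write/read-only split like the one described maps onto PGO's `PostgresCluster` resource. A minimal sketch (cluster name, Postgres version, and storage sizes are examples, not the poster's actual configuration):

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 16
  instances:
    - name: instance1
      replicas: 3            # one primary plus two replicas for read-only traffic
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
  backups:
    pgbackrest:              # PGO manages backups via pgBackRest
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 10Gi
```

PGO exposes separate `-primary` and `-replicas` Services, which is what makes routing reads and writes to different node sets straightforward.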
- Do people use DBs as Pods?
cert-manager
- deploying a minio service to kubernetes
cert-manager
- Upgrading Hundreds of Kubernetes Clusters
The second one is a combination of tools: External DNS, cert-manager, and NGINX ingress. Using these as a stack, you can quickly deploy an application and make it available through a DNS name with a TLS certificate without much effort, via simple annotations. When I first discovered External DNS, I was amazed at its quality.
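The "simple annotations" pattern looks roughly like this sketch of an Ingress (the hostname, service name, and issuer name are assumptions; `cert-manager.io/cluster-issuer` triggers certificate issuance and external-dns creates the DNS record from the host rule):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # cert-manager issues the certificate into the secret named under spec.tls
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # external-dns publishes this hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts: ["app.example.com"]
      secretName: my-app-tls       # created and renewed by cert-manager
```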
- Run WebAssembly on DigitalOcean Kubernetes with SpinKube - In 4 Easy Steps
On top of its core components, SpinKube depends on cert-manager, which is responsible for provisioning and managing the TLS certificates used by the admission webhook system of the Spin Operator. Let's install cert-manager and KWasm.
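The install commands themselves did not survive here; a typical cert-manager install via Helm looks like the following sketch (the jetstack chart repo and `installCRDs` value are cert-manager's documented defaults; the KWasm step is omitted):

```shell
# Install cert-manager into its own namespace, including its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```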
- Importing kubernetes manifests with terraform for cert-manager
```hcl
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}

# Reference to the current GCP project (or an AWS project)
data "google_client_config" "provider" {}

# Reference to the current cluster (or EKS)
data "google_container_cluster" "my_cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

# Configure the kubectl provider to authenticate with those values
provider "kubectl" {
  host                   = data.google_container_cluster.my_cluster.endpoint
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
}

# Download the multi-document manifest file.
data "http" "cert_manager_crds" {
  url = "https://github.com/cert-manager/cert-manager/releases/download/v${var.cert_manager_version}/cert-manager.crds.yaml"
}

data "kubectl_file_documents" "cert_manager_crds" {
  content = data.http.cert_manager_crds.response_body

  lifecycle {
    precondition {
      condition     = 200 == data.http.cert_manager_crds.status_code
      error_message = "Status code invalid"
    }
  }
}

# Use for_each, or else kubectl_manifest will only import the first manifest in the file.
resource "kubectl_manifest" "cert_manager_crds" {
  for_each  = data.kubectl_file_documents.cert_manager_crds.manifests
  yaml_body = each.value
}
```
- An opinionated template for deploying a single k3s cluster with Ansible backed by Flux, SOPS, GitHub Actions, Renovate, Cilium, Cloudflare and more!
SSL certificates thanks to Cloudflare and cert-manager
- Deploy Rancher on AWS EKS using Terraform & Helm Charts
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.crds.yaml
- Setup/Design internal PKI
Put the sub-CA inside HashiCorp Vault to be used for automatic signing of services like https://cert-manager.io/ inside our k8s clusters.
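cert-manager supports this directly through its Vault issuer type. A sketch of the wiring (the server URL, signing path, role, and secret names are assumptions for illustration; the `spec.vault` schema is cert-manager's documented Vault issuer):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    server: https://vault.example.internal:8200
    # Vault PKI secrets engine path pointing at the sub-CA's signing role
    path: pki_int/sign/internal-services
    auth:
      kubernetes:
        role: cert-manager
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: cert-manager-vault-token   # service account token for Vault auth
          key: token
```

Certificates requested in-cluster (e.g. via Ingress annotations) are then signed by the sub-CA without the sub-CA key ever leaving Vault.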
- Task vs Make - Final Thoughts
```yaml
install-cert-manager:
  desc: Install cert-manager
  deps:
    - init-cluster
  cmds:
    - kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/{{.CERT_MANAGER_VERSION}}/cert-manager.yaml
    - echo "Waiting for cert-manager to be ready" && sleep 25
  status:
    - kubectl -n cert-manager get pods | grep Running | wc -l | grep -q 3
```
- Easy HTTPS for your private networks
I've been pretty frustrated with how private CAs are supported. Your private root CA can be maliciously used to MITM every domain on the Internet, even though you intend to use it for only a couple domain names. Most people forget to set Name Constraints when they create these and many helper tools lack support [1][2]. Worse, browser support for Name Constraints has been slow [3] and support isn't well tracked [4]. Public CAs give you certificate transparency and you can subscribe to events to detect mis-issuance. Some hosted private CAs like AWS's offer logs [5], but DIY setups don't.
Even still, there are a lot of folks happily using private CAs, they aren't the target audience for this initial release.
[1] https://github.com/FiloSottile/mkcert/issues/302
[2] https://github.com/cert-manager/cert-manager/issues/3655
[3] https://alexsci.com/blog/name-non-constraint/
[4] https://github.com/Netflix/bettertls/issues/19
[5] https://docs.aws.amazon.com/privateca/latest/userguide/secur...
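For comparison, a root CA *with* Name Constraints can be created in one step with a modern OpenSSL (a sketch; the domain and filenames are examples, and `-addext` requires OpenSSL 1.1.1+):

```shell
# Create a private root CA whose Name Constraints restrict it to one DNS subtree,
# so a leaked key cannot be used to MITM arbitrary Internet domains.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt \
  -subj "/CN=Internal Root CA" \
  -addext "nameConstraints=critical,permitted;DNS:.internal.example.com"

# Inspect the constraint on the resulting certificate
openssl x509 -in ca.crt -noout -text | grep -A1 "Name Constraints"
```

As [3] notes, whether clients actually enforce the constraint varies, so this is defense in depth rather than a complete fix.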
- ☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
the Cert Manager
What are some alternatives?
kubegres - Kubegres is a Kubernetes operator that lets you deploy one or many clusters of PostgreSQL instances and manage database replication, failover and backup.
metallb - A network load-balancer implementation for Kubernetes using standard routing protocols
postgres-operator - Postgres operator creates and manages PostgreSQL clusters running in Kubernetes
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
longhorn - Cloud-Native distributed storage built on and for Kubernetes
Portainer - Making Docker and Kubernetes management easy.
postgres-operator - Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.
awx-operator - An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible. 🤖
cloudnative-pg - CloudNativePG is a comprehensive platform designed to seamlessly manage PostgreSQL databases within Kubernetes environments, covering the entire operational lifecycle from initial deployment to ongoing maintenance
k3s - Lightweight Kubernetes
oauth2-proxy - A reverse proxy that provides authentication with Google, Azure, OpenID Connect and many more identity providers.