postgres-operator
charts
| | postgres-operator | charts |
|---|---|---|
| Mentions | 33 | 88 |
| Stars | 3,719 | 8,391 |
| Growth | 1.9% | 2.5% |
| Activity | 9.0 | 10.0 |
| Latest Commit | 8 days ago | 5 days ago |
| Language | Go | Smarty |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
postgres-operator
- No disk space crashloop but pod healthy · Issue #3788 · CrunchyData/postgres-operator
- Deploying Postgres on Kubernetes in production
- Anyone using cloudnativepg in production?
-
Jolt v0.5.2 is available!
As for operators, I've been using Crunchy PGO, which is very high quality and one of the most widely used. You can install it via Helm, or via OLM from OperatorHub. There are other good ones as well, but none that I have experience with. The only issue I've run into so far is that I've had to disable TLS on the database cluster, as Prowlarr refused to connect to it for some reason (Radarr was fine). I still need to open an issue with the Prowlarr team about that, but I might switch to a service mesh for TLS anyway.
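For reference, a PGO-managed cluster is declared through a `PostgresCluster` custom resource. A minimal manifest looks roughly like the sketch below; the cluster name `hippo`, the Postgres version, and the storage sizes are illustrative placeholders, so consult the Crunchy PGO v5 documentation for the authoritative schema:

```yaml
# Sketch of a minimal Crunchy PGO v5 cluster; names and sizes are illustrative.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 15
  instances:
    - name: instance1
      replicas: 2                      # one primary plus one replica
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:                        # PGO provisions pgBackRest for backups
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
```

Applying this with `kubectl apply` is enough for the operator to create the pods, the replication setup, and the backup repository.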
-
Can someone share experience configuring Highly Available PgSQL?
The Crunchy operator, seemingly like most (if not all) of the other Postgres operators (Zalando, KubeDB, and StackGres, etc.), is essentially a wrapper for Patroni. IMO if someone wanted a Patroni cluster, they would just build one. The point of an operator is to manage the cluster resources and node relationships, so why not have it take the role Patroni is filling here? It's already reaching into the nodes, obtaining status, managing the routing, etc., so why add the extra layer?
-
Questions about Kubernetes
On the topic of Postgres, you should look into an operator or Helm chart that can set up common things (like replication and auto-failover), such as Crunchy's Postgres operator, or consider using a "cloud-native" distributed database like CockroachDB (disclaimer: I am a Cockroach Labs employee), which has its own operator as well. Another word of warning: running stateful services, particularly mission-critical databases, can require a lot of maintenance work (it's my full-time job), so unless this is for a hobby project, I would highly recommend you look into using a managed database offering. Every major cloud provider and most database companies have one.
-
My girlfriend left me... I have a K8s cluster, Argo CD, Longhorn, Traefik, MetalLB, on 3 OptiPlex MFFs with Proxmox... This is the start, gentlemen, I'll post back in 1 year. This dashboard will be full, my friends, I promise, see you in the rabbit hole o/
For Postgres you can also have a look at PGO or the Bitnami Helm chart.
-
Databases on Kubernetes are fundamentally the same as databases on a VM
Let's say a new Kubernetes version comes out in April. In November, as everything works perfectly well, you decide to install a Postgres operator on it. Bummer, it doesn't work. It's not a huge issue, you just wait until the bug is resolved (already done[0]), but it's one of those tiny things that I don't run into when running Postgres natively. And I'm saying this as a big fan of Crunchy Data, running some production loads on it without a failure for quite some time now.
[0] https://github.com/CrunchyData/postgres-operator/issues/3476
-
Are you running databases on Kubernetes?
There is one particular client that has a somewhat big database, 40-120 GB (it changes size over the year), and for that we used the CrunchyData Postgres operator ( https://access.crunchydata.com/documentation/postgres-operator/v5/ ). We have no commercial relation with them, but oh boy, let me tell you what a godsend that thing is. This database in particular processes massive data and is distributed between several nodes in a read-write and read-only set, and it is amazing how easy it is to move things around, take backups, increase capacity, and use a bunch of other goodies that operator brings. Give it a try.
- Do people use DBs as Pods?
charts
-
Coexistence of containers and Helm charts - OCI based registries
Both of these examples seem pretty obvious and something you wouldn’t mess up, but as your chart grows, so does your values.yaml file. A great example is the Redis chart by Bitnami. I encourage you to scroll through its values file. See you in a minute!
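To illustrate how quickly a values file grows, even a modest override for the Bitnami Redis chart already nests several levels deep. The keys below follow the chart's documented layout, but the concrete values are illustrative:

```yaml
# Illustrative override for the Bitnami Redis chart; values are examples only.
architecture: replication     # one master plus read replicas
auth:
  enabled: true
  password: "change-me"       # placeholder; use an existingSecret in practice
master:
  persistence:
    size: 8Gi
replica:
  replicaCount: 3
  persistence:
    size: 8Gi
```

The chart's full values file covers hundreds of such keys, which is exactly why it pays to read through it before deploying.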
-
How to deploy and manage a RabbitMQ cluster on Amazon EKS using Terraform and Helm
We will write a Terraform module that takes a list of configurations for each required RabbitMQ instance. Luckily for us, we don't have to write the Kubernetes YAML configurations, since the Helm chart by Bitnami does a great job of all the things we discussed above. All we need to do is leverage the Terraform Helm provider and deploy the chart with the required values for our use case.
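The values handed to the chart (for example via the Terraform Helm provider's `values` argument) might look something like the sketch below. The keys follow the Bitnami RabbitMQ chart's layout, and the concrete numbers are placeholders:

```yaml
# Example values for the Bitnami RabbitMQ chart; numbers are placeholders.
replicaCount: 3
auth:
  username: admin
  password: "change-me"     # prefer an existing Kubernetes Secret in practice
clustering:
  enabled: true             # let the chart form a cluster across replicas
persistence:
  size: 8Gi
```

The Terraform module then only has to template this file per instance and pass it to a `helm_release` resource.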
-
Master Helm, Chart the Kubernetes Seas 🌊🧭🏴☠️
💡 The full details of helm charts can be referenced in their associated GitHub Repository.
-
Bitnami Kibana dashboard import
I have a ConfigMap with the ndjson set up under data:, similar to https://github.com/bitnami/charts/issues/6159 and its subsequent answer.
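The shape of such a ConfigMap is plain Kubernetes; a sketch along these lines, where the name and the ndjson payload are placeholders:

```yaml
# Hypothetical ConfigMap carrying a Kibana saved-objects export (.ndjson).
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-dashboards
data:
  dashboards.ndjson: |
    {"type":"index-pattern","id":"logs-*","attributes":{"title":"logs-*"}}
    {"type":"dashboard","id":"logs-overview","attributes":{"title":"Logs overview"}}
```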
-
Deploy Kubernetes Helm Charts in Minutes
This way, you can easily deploy any Helm charts from this public repo - https://github.com/bitnami/charts/tree/main/bitnami in just minutes.
- [Kubernetes] How do you deploy a Postgres cluster on Kubernetes in 2022?
-
Is there any tutorial, blog post that shows you how to use the bitnami-mysql helm chart?
The Bitnami GitHub pages themselves usually cover everything you need to know. Configure a values.yaml file to your liking and run helm install, as written in their docs.
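As a concrete sketch, a values.yaml for the bitnami/mysql chart often needs only a few overrides. The keys follow the chart's documented layout; the values themselves are placeholders:

```yaml
# Illustrative values.yaml for the bitnami/mysql chart.
auth:
  rootPassword: "change-me"   # placeholder; an existingSecret is safer
  database: my_app
architecture: replication     # primary plus read-only secondaries
secondary:
  replicaCount: 2
primary:
  persistence:
    size: 8Gi
```

With that file in place, `helm install my-db bitnami/mysql -f values.yaml` is the whole deployment.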
-
Dynamic Volume Provisioning in Kubernetes with AWS and Terraform
The actual reason that our pods are not coming up is found when we review the Helm installation we are trying to run. If you check the dependencies in the GitHub repository (https://github.com/bitnami/charts/blob/main/bitnami/drupal/values.yaml), you find that persistent storage is enabled by default and set to 8Gi. The Helm package also uses MariaDB, whose database size defaults to 8Gi as well, setting the minimum storage for this installation to 16Gi.
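If 16Gi is more than the environment can provision, both defaults can be lowered from a single override file. The keys below mirror the Drupal chart's values.yaml layout, with illustrative sizes:

```yaml
# Override the Drupal chart's storage defaults; sizes here are illustrative.
persistence:
  size: 4Gi          # Drupal's own PVC (chart default: 8Gi)
mariadb:
  primary:
    persistence:
      size: 4Gi      # MariaDB subchart PVC (chart default: 8Gi)
```

Nesting the MariaDB override under the `mariadb` key is what routes it to the subchart.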
-
Experience setting up Spark and Hudi on Kubernetes
We're using https://github.com/bitnami/charts/tree/main/bitnami/spark, but I have heard good things about https://github.com/GoogleCloudPlatform/spark-on-k8s-operator as well. Hudi should not need any long running deployments as per the docs https://hudi.apache.org/docs/0.5.1/deployment/#deploying
-
"helm create" command for the Bitnami charts/common library?
Bitnami has its own scaffolding published at https://github.com/bitnami/charts/tree/main/template
What are some alternatives?
kubegres - Kubegres is a Kubernetes operator for deploying one or many clusters of PostgreSQL instances and managing database replication, failover and backup.
helm-charts - A curated set of Helm charts brought to you by codecentric
postgres-operator - Postgres operator creates and manages PostgreSQL clusters running in Kubernetes
oauth2-proxy - A reverse proxy that provides authentication with Google, Azure, OpenID Connect and many more identity providers.
longhorn - Cloud-Native distributed storage built on and for Kubernetes
renovate - Universal dependency automation tool.
postgres-operator - Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.
promscale - [DEPRECATED] Promscale is a unified metric and trace observability backend for Prometheus, Jaeger and OpenTelemetry built on PostgreSQL and TimescaleDB.
cloudnative-pg - CloudNativePG is a comprehensive platform designed to seamlessly manage PostgreSQL databases within Kubernetes environments, covering the entire operational lifecycle from initial deployment to ongoing maintenance
kube-thanos - Kubernetes specific configuration for deploying Thanos.
k3s - Lightweight Kubernetes