| | helm-charts | autoscaler |
|---|---|---|
| Mentions | 14 | 89 |
| Stars | 1,807 | 7,652 |
| Growth | - | 1.1% |
| Activity | 0.0 | 9.7 |
| Latest Commit | about 1 year ago | 5 days ago |
| Language | Python | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
helm-charts
- ☸️ Web Application on Kubernetes: A Tutorial to Observability with the Elastic Stack
You need Helm installed on your machine. The Elastic modules will be installed from the official Elastic Helm repo. Although the repo is now read-only, it is still fully functional, and the community is expected to maintain it.
- ☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
The Elastic Stack is our Swiss Army knife on the cluster. The whole stack has been installed in version 8.5.1 inside the cluster. We know it's not recommended to run stateful apps in Kubernetes, but we are at an early stage of our production.
- Loading Kibana dashboards using Metricbeat through Helm charts
Hi, I am looking to load the default dashboards that come pre-built in Kibana by setting up a Kibana endpoint in the Metricbeat configuration. The `setup.kibana` option is not available as a setting in the official Metricbeat Helm chart (https://github.com/elastic/helm-charts/blob/main/metricbeat/values.yaml). The option to set up a Kibana endpoint only exists in the regular metricbeat.yml file, which we generally use in a VM-based deployment (https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html).
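One workaround, sketched under the assumption that the chart renders the whole metricbeat.yml from its `metricbeatConfig` value (the key names and hostnames below come from a default install and may differ in your chart version), is to supply the full config, including `setup.kibana`, as a values override:

```yaml
# Sketch of a values override for the elastic/metricbeat chart.
# Verify daemonset.metricbeatConfig against the chart version you use;
# the service hostnames are assumptions from a default install.
daemonset:
  metricbeatConfig:
    metricbeat.yml: |
      metricbeat.modules:
        - module: kubernetes
          metricsets: ["pod", "node"]
      setup.kibana:
        host: "http://kibana-kibana:5601"
      setup.dashboards.enabled: true
      output.elasticsearch:
        hosts: ["http://elasticsearch-master:9200"]
```

Because the chart templates the config file verbatim, anything valid in a VM-based metricbeat.yml, including the dashboard setup options, can be passed this way.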
- How do I expose the ES/Kibana created by my ECK operator on K8s?
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
# https://kubernetes.io/docs/concepts/services-networking/ingress/
metadata:
  name: quickstart-es-http-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    # https://github.com/elastic/helm-charts/issues/779
    # https://stackoverflow.com/questions/68893838/ingress-for-eck-elasticsearch-not-working-502-gateway
    # kubernetes.io/ingress.class: nginx  # think this is wrong for our class?
    kubernetes.io/tls-acme: "true"
    # nginx.ingress.kubernetes.io/proxy-ssl-secret: "resources/elastic-certificate-pem"  # => need to point to ES certificate pem
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"  # => must be false if you use elasticsearch-utils to generate CA
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # => must be HTTPS <- this one fixed it
spec:
  ingressClassName: public
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: quickstart-es-http
                port:
                  number: 9200
```
- Deploy Elasticsearch 8.5 on Kubernetes with Okteto Cloud free plan
Unfortunately, the new security system introduced in ES 8.0 causes problems with the official Helm chart, so we cannot use the standard Okteto chart deploy system. In this article we will see how to deploy ES 8.x on Kubernetes (k8s) using Okteto Cloud as the platform.
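A dev-only sketch of the usual workaround: relaxing ES 8.x security through the chart's `esConfig` override so the stock chart can deploy without the TLS bootstrap. The key names assume the elastic/elasticsearch chart and should be checked against your chart version; never do this in production:

```yaml
# values override, DEV ONLY: disables ES 8.x security so the chart
# starts on a free-tier cluster without certificates.
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: false
    xpack.security.http.ssl.enabled: false
protocol: http
```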
- Architecture for Logstash and how to deploy
This might help you; you'd need to run this separately from the operator: https://github.com/elastic/helm-charts/blob/main/logstash/ Before embarking on a Logstash journey, you may want to check whether Beats (or Agent) combined with Elasticsearch ingest pipelines can meet your needs.
- Kubernetes Logging in Production
We will deploy Elasticsearch and Kibana using the official Helm charts, which can be found here (Elasticsearch, Kibana). Installing via Helm requires a helm binary on your path, but installation of Helm itself is outside the scope of this post.
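A minimal install sketch with the official charts (the release names are arbitrary choices, and for a real cluster you would pin chart versions and supply a values file):

```shell
# Add the official Elastic Helm repo and install the two charts.
helm repo add elastic https://helm.elastic.co
helm repo update

# Release names are arbitrary; pin --version to match your stack version.
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana
```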
- Passing annotations for a Helm resource isn't working as expected.
- Question regarding ElasticSearch
They have the repo with their helm charts published on github. Here's the values.yaml for elastic. The other apps (kibana, filebeat, etc) are in adjacent folders. The values you see in these files are the defaults.
- Elasticsearch installation in Helm fails with StatefulSet error
Looks like they have recently changed node.roles: https://github.com/elastic/helm-charts/pull/1186
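If an existing values file still uses the pre-change layout, the rendered StatefulSet can fail to apply. A hedged sketch of the newer list form (verify the exact shape against the chart version you install):

```yaml
# Old map form (sketch of the pre-change layout, now rejected):
# roles:
#   master: "true"
#   data: "true"
#
# Newer form: roles as a plain list of node.roles values.
roles:
  - master
  - data
  - ingest
```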
autoscaler
- Upgrading Hundreds of Kubernetes Clusters
We use Cluster Autoscaler to automatically adjust the number of nodes (cluster size) based on your actual usage to ensure efficiency. Additionally, we deploy the Vertical and Horizontal Pod Autoscalers to automatically scale your applications' resources as their needs change.
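As a sketch of the pod-level half of that setup, a minimal autoscaling/v2 HorizontalPodAutoscaler that keeps average CPU around 70% could look like this (the Deployment name, replica bounds, and threshold are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When average CPU across the pods exceeds the target, the HPA adds replicas; Cluster Autoscaler then adds nodes if the new pods don't fit, which is how the two layers compose.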
- Not Everything Is Google's Fault (Just Most Things)
> * Hetzner: cheap, good service, the finest pets in the world, no cattle
You can absolutely do cattle with Hetzner. They support imaging and immutable infrastructure. They don't have a native auto scaling equivalent, but if you're using Kubernetes, they have a cluster autoscaler: https://github.com/kubernetes/autoscaler/blob/master/cluster...
- Kubernetes (K8s) Autoscaler — a detailed look at the design and implementation of VPA
Here we take the VPA as a starting point to analyze its design and implementation principles in Autoscaler. The source code for this article is based on Autoscaler HEAD fbe25e1.
- Scaling with Karpenter and Empty Pods (a.k.a. Overprovisioning)
- Reducing Cloud Costs on Kubernetes Dev Envs
Autoscaling over EKS can be accomplished using either the cluster-autoscaler project or Karpenter. If you want to use Spot instances, consider using Karpenter, as it has better integrations with AWS for optimizing spot pricing and availability, minimizing interruptions, and falling back to on-demand nodes if no spot instances are available.
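A hedged sketch of a Karpenter NodePool that prefers Spot but can fall back to on-demand. Field names follow the v1 API; older Karpenter releases use Provisioner/v1beta1 shapes instead, so check against your installed version, and the EC2NodeClass name is an assumption:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Allowing both capacity types lets Karpenter prefer cheaper Spot
        # capacity and fall back to on-demand when no Spot is available.
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default        # assumed EC2NodeClass defined elsewhere
```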
- ☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
Autoscaling is already provided on OVH, but we don't use it for now. The Autoscaler has to be installed manually on the AWS/EKS cluster.
- Relevant way of scaling pods
Do you mean this: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/recommender/README.md
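For reference, a minimal VerticalPodAutoscaler that exercises just the recommender, computing resource recommendations without ever evicting pods, might look like this (the target Deployment name is hypothetical):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa            # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment
  updatePolicy:
    updateMode: "Off"      # recommender only: compute, don't apply
```

With `updateMode: "Off"`, the recommendations appear in the VPA object's status and can be read with `kubectl describe vpa web-vpa` before deciding to let the updater act.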
- Kubernetes Cluster Maintenance
Read more about this scaler in detail here!
- Anyone running Windows nodes in your clusters?
We have a default node group of Linux hosts, but there's a secondary nodegroup of Windows hosts that is typically scaled down to 0. When a team's build runs, a pod is scheduled based on their definition. Cluster-autoscaler will check the nodeSelector and automatically spin up a node from that nodegroup if necessary.
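A sketch of such a pod spec: the nodeSelector steers the pod to the Windows nodegroup, and the toleration is an assumption about how that nodegroup might be tainted to keep Linux workloads off it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-build          # hypothetical build pod
spec:
  nodeSelector:
    kubernetes.io/os: windows   # well-known node label set by the kubelet
  tolerations:
    # Assumed taint on the Windows nodegroup; match it to your setup.
    - key: "os"
      value: "windows"
      effect: "NoSchedule"
  containers:
    - name: build
      image: mcr.microsoft.com/windows/servercore:ltsc2022
      command: ["cmd", "/c", "echo build"]
```

When no Windows node is running, this pod stays Pending; Cluster Autoscaler sees the unschedulable pod, matches its selector to the scaled-to-zero nodegroup, and provisions a node.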
- How to make sure the Kubernetes autoscaler does not delete nodes running a specific pod
I am running a Kubernetes cluster (an AWS EKS one) with an Autoscaler pod, so that the cluster autoscales according to resource requests within the cluster.
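One common approach (not the only one) is Cluster Autoscaler's safe-to-evict annotation, which prevents the autoscaler from removing a node while an annotated pod runs on it; a minimal sketch with a hypothetical workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-worker    # hypothetical pod that must not be evicted
  annotations:
    # Cluster Autoscaler will not scale down a node hosting this pod.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "infinity"]
```

Note this only blocks autoscaler-driven scale-down; it does not protect the pod from manual drains or node failures, for which a PodDisruptionBudget is the usual complement.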
What are some alternatives?
go-getting-started - Develop Go Apps in Kubernetes with Okteto
karpenter-provider-aws - Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
Elasticsearch - Free and Open, Distributed, RESTful Search Engine
cluster-proportional-autoscaler - Kubernetes Cluster Proportional Autoscaler Container
cert-manager - Automatically provision and manage TLS certificates in Kubernetes
aws-ebs-csi-driver - CSI driver for Amazon EBS https://aws.amazon.com/ebs/
public-cloud-roadmap - Agile roadmap for OVHcloud Public Cloud services. Discover the features our product teams are working on, comment and influence our backlog.
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
charts - ⚠️(OBSOLETE) Curated applications for Kubernetes
descheduler - Descheduler for Kubernetes
elastic-certified-engineer - Playground zone to prepare the Elasticsearch engineer exam
k3s-aws-terraform-cluster - Deploy a highly available K3s cluster on Amazon AWS