cert-manager vs metrics-server

Compare cert-manager vs metrics-server and see what their differences are.

cert-manager

Automatically provision and manage TLS certificates in Kubernetes (by cert-manager)

metrics-server

Scalable and efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. (by kubernetes-sigs)
                  cert-manager         metrics-server
Mentions          101                  40
Stars             11,457               5,426
Growth            1.7%                 2.4%
Activity          9.8                  8.6
Latest commit     6 days ago           5 days ago
Language          Go                   Go
License           Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cert-manager

Posts with mentions or reviews of cert-manager. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • deploying a minio service to kubernetes
    3 projects | dev.to | 8 Apr 2024
    cert-manager
  • Upgrading Hundreds of Kubernetes Clusters
    17 projects | dev.to | 3 Apr 2024
    The second one is a combination of tools: External DNS, cert-manager, and the NGINX ingress controller. Used as a stack, they let you quickly deploy an application and expose it under a DNS name with a TLS certificate, all through simple annotations and with very little effort. When I first discovered External DNS, I was amazed at its quality.
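    As a rough sketch of what those annotations look like in practice (the hostname, ingress class, and the letsencrypt-prod issuer name below are placeholders, not taken from the post), an Ingress wired into that stack might read:

      kubectl apply -f - <<'EOF'
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: my-app
        annotations:
          # cert-manager sees this annotation and issues a certificate into spec.tls.secretName
          cert-manager.io/cluster-issuer: letsencrypt-prod
          # External DNS creates the DNS record for the host below
          external-dns.alpha.kubernetes.io/hostname: app.example.com
      spec:
        ingressClassName: nginx
        tls:
        - hosts:
          - app.example.com
          secretName: my-app-tls
        rules:
        - host: app.example.com
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: my-app
                  port:
                    number: 80
      EOF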
  • Run WebAssembly on DigitalOcean Kubernetes with SpinKube - In 4 Easy Steps
    6 projects | dev.to | 27 Mar 2024
    On top of its core components, SpinKube depends on cert-manager, which is responsible for provisioning and managing the TLS certificates used by the Spin Operator's admission webhook system. Let's install cert-manager and KWasm using the commands shown here:
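    The install commands themselves were cut off in this excerpt. For reference, a typical install of that vintage looks like the sketch below; the cert-manager version pin is an assumption, and the KWasm Helm repo URL is taken from the KWasm docs rather than from the post:

      # Install cert-manager from its static manifest (version is a placeholder)
      kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

      # Install the KWasm operator via Helm (repo URL assumed per the KWasm docs)
      helm repo add kwasm http://kwasm.sh/kwasm-operator/
      helm install kwasm-operator kwasm/kwasm-operator --namespace kwasm --create-namespace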
  • Importing kubernetes manifests with terraform for cert-manager
    1 project | dev.to | 17 Jan 2024
    terraform {
      required_providers {
        kubectl = {
          source  = "gavinbunney/kubectl"
          version = "1.14.0"
        }
      }
    }

    # The reference to the current project (or an AWS project)
    data "google_client_config" "provider" {}

    # The reference to the current cluster (or EKS)
    data "google_container_cluster" "my_cluster" {
      name     = var.cluster_name
      location = var.cluster_location
    }

    # Configure the kubectl provider to use those values for authentication
    provider "kubectl" {
      host                   = data.google_container_cluster.my_cluster.endpoint
      token                  = data.google_client_config.provider.access_token
      cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
    }

    # Download the multi-document manifest file.
    data "http" "cert_manager_crds" {
      url = "https://github.com/cert-manager/cert-manager/releases/download/v${var.cert_manager_version}/cert-manager.crds.yaml"
    }

    data "kubectl_file_documents" "cert_manager_crds" {
      content = data.http.cert_manager_crds.response_body

      lifecycle {
        precondition {
          condition     = 200 == data.http.cert_manager_crds.status_code
          error_message = "Status code invalid"
        }
      }
    }

    # Use for_each, or else this kubectl_manifest would only import the first manifest in the file.
    resource "kubectl_manifest" "cert_manager_crds" {
      for_each  = data.kubectl_file_documents.cert_manager_crds.manifests
      yaml_body = each.value
    }
  • An opinionated template for deploying a single k3s cluster with Ansible backed by Flux, SOPS, GitHub Actions, Renovate, Cilium, Cloudflare and more!
    6 projects | /r/kubernetes | 4 Dec 2023
    SSL certificates thanks to Cloudflare and cert-manager
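    For a sense of how those two fit together: a cert-manager ClusterIssuer can solve ACME DNS-01 challenges through Cloudflare. The sketch below uses placeholder names throughout (the issuer name, email, and the cloudflare-api-token Secret are all assumptions):

      kubectl apply -f - <<'EOF'
      apiVersion: cert-manager.io/v1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod              # placeholder name
      spec:
        acme:
          server: https://acme-v02.api.letsencrypt.org/directory
          email: admin@example.com          # placeholder contact address
          privateKeySecretRef:
            name: letsencrypt-prod-key
          solvers:
          - dns01:
              cloudflare:
                apiTokenSecretRef:          # Secret holding a scoped Cloudflare API token
                  name: cloudflare-api-token
                  key: api-token
      EOF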
  • Deploy Rancher on AWS EKS using Terraform & Helm Charts
    3 projects | dev.to | 14 Nov 2023
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.crds.yaml
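    That command only installs the CRDs; in the usual Rancher install flow the chart itself follows via Helm, roughly as sketched here (namespace and version pin are the conventional choices, not quoted from the post):

      helm repo add jetstack https://charts.jetstack.io
      helm repo update
      helm install cert-manager jetstack/cert-manager \
        --namespace cert-manager \
        --create-namespace \
        --version "${CERT_MANAGER_VERSION}"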
  • Setup/Design internal PKI
    1 project | /r/sysadmin | 4 Nov 2023
    Put the sub-CA inside HashiCorp Vault to be used for automatic signing of services, e.g. via https://cert-manager.io/, inside our k8s clusters.
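    cert-manager supports exactly this pattern through its Vault issuer type; a minimal sketch, assuming a Kubernetes-auth role and an intermediate PKI mount in Vault (all names below are placeholders):

      kubectl apply -f - <<'EOF'
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: vault-issuer
        namespace: my-namespace                        # placeholder
      spec:
        vault:
          server: https://vault.example.internal:8200  # placeholder Vault address
          path: pki_int/sign/my-role                   # placeholder sign path on the sub-CA mount
          auth:
            kubernetes:
              role: issuer
              mountPath: /v1/auth/kubernetes
              secretRef:
                name: issuer-token                     # Secret with a service account token
                key: token
      EOF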
  • Task vs Make - Final Thoughts
    3 projects | dev.to | 10 Aug 2023
    install-cert-manager:
      desc: Install cert-manager
      deps:
        - init-cluster
      cmds:
        - kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/{{.CERT_MANAGER_VERSION}}/cert-manager.yaml
        - echo "Waiting for cert-manager to be ready" && sleep 25
      status:
        - kubectl -n cert-manager get pods | grep Running | wc -l | grep -q 3
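    With Task (go-task) installed, that target runs as shown below; thanks to the status check, re-running it is a no-op once the three cert-manager pods are up:

      task install-cert-manager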
  • Easy HTTPS for your private networks
    13 projects | news.ycombinator.com | 10 Jul 2023
    I've been pretty frustrated with how private CAs are supported. Your private root CA can be maliciously used to MITM every domain on the Internet, even though you intend to use it for only a couple domain names. Most people forget to set Name Constraints when they create these and many helper tools lack support [1][2]. Worse, browser support for Name Constraints has been slow [3] and support isn't well tracked [4]. Public CAs give you certificate transparency and you can subscribe to events to detect mis-issuance. Some hosted private CAs like AWS's offer logs [5], but DIY setups don't.

    Even still, there are a lot of folks happily using private CAs; they aren't the target audience for this initial release.

    [1] https://github.com/FiloSottile/mkcert/issues/302

    [2] https://github.com/cert-manager/cert-manager/issues/3655

    [3] https://alexsci.com/blog/name-non-constraint/

    [4] https://github.com/Netflix/bettertls/issues/19

    [5] https://docs.aws.amazon.com/privateca/latest/userguide/secur...
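    For what it's worth, a name-constrained private root is straightforward to create with OpenSSL; the config below is a sketch with placeholder names (the permitted domain and filenames are assumptions):

      # Create a private root CA that can only sign names under .internal.example.com,
      # so a leaked key can't be used to MITM arbitrary Internet domains
      cat > ca.cnf <<'EOF'
      [req]
      distinguished_name = dn
      x509_extensions    = v3_ca
      prompt             = no

      [dn]
      CN = Example Internal Root CA

      [v3_ca]
      basicConstraints = critical, CA:TRUE
      keyUsage         = critical, keyCertSign, cRLSign
      nameConstraints  = critical, permitted;DNS:.internal.example.com
      EOF

      openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
        -keyout ca.key -out ca.crt -config ca.cnf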

  • ☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
    6 projects | dev.to | 1 Jul 2023
    the Cert Manager

metrics-server

Posts with mentions or reviews of metrics-server. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-03.
  • Upgrading Hundreds of Kubernetes Clusters
    17 projects | dev.to | 3 Apr 2024
    The last one is mostly an observability stack with Prometheus, Metrics Server, and the Prometheus adapter, giving excellent insight into what is happening on the cluster. You can reuse the same stack for autoscaling by repurposing the data collected for monitoring.
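    As a hedged illustration of that reuse: once prometheus-adapter exposes a Prometheus series through the custom metrics API, an HPA can scale on it directly (the metric and deployment names here are hypothetical):

      kubectl apply -f - <<'EOF'
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: my-app
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: my-app
        minReplicas: 2
        maxReplicas: 20
        metrics:
        - type: Pods
          pods:
            metric:
              name: http_requests_per_second   # served by prometheus-adapter; hypothetical
            target:
              type: AverageValue
              averageValue: "100"
      EOF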
  • Deploy Secure Spring Boot Microservices on Amazon EKS Using Terraform and Kubernetes
    13 projects | dev.to | 23 Nov 2023
    and the Metrics Server.
  • ☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
    6 projects | dev.to | 1 Jul 2023
    Metrics-server is installed by default on OVH, and has to be installed manually on AWS/EKS clusters.
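    On EKS, that manual step and its verification are a one-liner each, using the project's standard manifest:

      kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
      kubectl top nodes   # prints CPU/memory usage once the first scrape lands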
  • Kubernetes HPA on AKS is failing with error 'missing request for cpu'
    1 project | /r/codehunter | 8 Jun 2023
    I have also installed metrics-server (though not sure whether that was required or not) using the following statement:

      kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
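    For context, that HPA error usually means a container in the target pods declares no CPU request; metrics-server alone doesn't resolve it. One hedged way to add the missing request (the deployment name and value are placeholders):

      kubectl patch deployment my-app --type='json' -p='[
        {"op": "add",
         "path": "/spec/template/spec/containers/0/resources",
         "value": {"requests": {"cpu": "100m"}}}
      ]'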
  • Factors affecting performance of job running in a pod
    1 project | /r/kubernetes | 5 Jun 2023
    For #2: There are MANY solutions, but the simplest and easiest is https://github.com/kubernetes-sigs/metrics-server, after which you can use kubectl top to view pod resource usage. If you want fancy graphs, long retention, alerting, analysis, etc. of resource metrics, then you need something like New Relic, Datadog, or something open source but much more complex like Prometheus/Thanos/Grafana, or a managed SaaS version of those open-source tools. Some k8s platforms like AWS, Spot.io, GKE, and OpenShift may have this built in or turnkey.
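    For reference, the kubectl top usage mentioned above looks like this (the namespace is a placeholder):

      kubectl top nodes                        # per-node CPU and memory
      kubectl top pods -A                      # per-pod usage across all namespaces
      kubectl top pods -n my-ns --containers   # per-container breakdown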
  • Autoscaling Ingress controllers in Kubernetes
    4 projects | dev.to | 17 Apr 2023
    It's an autoscaler with a metrics server (so I don't need to install 2 different tools).
  • Checking the Capability of ChatGPT for DevOps Automation Solution Design
    1 project | dev.to | 9 Apr 2023
    Sure, here's an example of how to configure the Kubernetes Metrics Server along with the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) configurations we provided earlier. The Metrics Server is a Kubernetes add-on that provides resource utilization metrics for pods and nodes, which are used by the HPA and VPA to determine when and how to scale an application.

    First, you'll need to install the Metrics Server into your Kubernetes cluster. You can do this by running the following command:

      kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    Once the Metrics Server is installed, you can configure your HPA and VPA to use it for metrics by adding the following annotation to your deployment:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app
        annotations:
          autoscaling.alpha.kubernetes.io/metrics-provider: "metrics-server"

    With this annotation in place, the HPA and VPA will use the Metrics Server to obtain CPU and memory utilization metrics for your pods and nodes. Here's an updated example of the HPA configuration we provided earlier, with the addition of the Metrics Server configuration:

      apiVersion: autoscaling/v2beta2
      kind: HorizontalPodAutoscaler
      metadata:
        name: my-app-hpa
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: my-app
        minReplicas: 1
        maxReplicas: 10
        metrics:
        - type: Resource
          resource:
            name: cpu
            targetAverageUtilization: 50
        - type: Resource
          resource:
            name: memory
            targetAverageUtilization: 80

    In this example, we've added a second metric to the HPA configuration that targets memory utilization at 80%. Here's an updated example of the VPA configuration we provided earlier, with the addition of the Metrics Server configuration:

      apiVersion: autoscaling.k8s.io/v1
      kind: VerticalPodAutoscaler
      metadata:
        name: my-app-vpa
      spec:
        targetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: my-app
        updatePolicy:
          updateMode: "Off"
        resourcePolicy:
          containerPolicies:
          - containerName: "*"
            minAllowed:
              cpu: 50m
              memory: 256Mi
            maxAllowed:
              cpu: 500m
              memory: 1Gi
        metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 50
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 80

    In this example, we've added two metrics to the VPA configuration that target CPU and memory utilization, with target average utilization of 50% and 80% respectively. I hope this helps you configure the Metrics Server, HPA, and VPA for your application in Kubernetes!
  • plz help
    1 project | /r/kubernetes | 26 Feb 2023
    I'd go for k3s, then install metrics-server; then you can deploy some HPAs.
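    Once metrics are flowing, an HPA really is a one-liner (the deployment name and bounds are placeholders):

      kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=5
      kubectl get hpa   # TARGETS should show a live CPU percentage, not <unknown>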
  • Autoscaling Nodes in Kubernetes
    3 projects | dev.to | 31 Dec 2022
    # Create EKS cluster with version 1.23
    eksctl create cluster -f eks-cluster.yaml

    # Output like below shows the cluster has been successfully created
    2022-12-30 16:26:46 [ℹ]  kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
    2022-12-30 16:26:46 [✔]  EKS cluster "ca-demo" in "us-west-2" region is ready

    # Deploy the Metrics Server
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    # Output of the above command looks something like below -
    serviceaccount/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    service/metrics-server created
    deployment.apps/metrics-server created
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
  • Korifi: An experimental Cloud Foundry V3 API in Kubernetes …
    7 projects | dev.to | 25 Dec 2022
    ubuntu@korifi:~$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.2/components.yaml
    serviceaccount/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    service/metrics-server created
    deployment.apps/metrics-server created
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

    ubuntu@korifi:~$ kubectl get po,svc -A
    NAMESPACE               NAME                                                  READY   STATUS      RESTARTS   AGE
    cert-manager            pod/cert-manager-74d949c895-w6gzm                     1/1     Running     0          13m
    cert-manager            pod/cert-manager-cainjector-d9bc5979d-jhr9m           1/1     Running     0          13m
    cert-manager            pod/cert-manager-webhook-84b7ddd796-xw878             1/1     Running     0          13m
    kpack                   pod/kpack-controller-84cbbcdff6-nnhdn                 1/1     Running     0          9m40s
    kpack                   pod/kpack-webhook-56c6b59c4-9zvlb                     1/1     Running     0          9m40s
    kube-system             pod/coredns-565d847f94-kst2l                          1/1     Running     0          31m
    kube-system             pod/coredns-565d847f94-rv8pn                          1/1     Running     0          31m
    kube-system             pod/etcd-kind-control-plane                           1/1     Running     0          32m
    kube-system             pod/kindnet-275pd                                     1/1     Running     0          31m
    kube-system             pod/kube-apiserver-kind-control-plane                 1/1     Running     0          32m
    kube-system             pod/kube-controller-manager-kind-control-plane        1/1     Running     0          32m
    kube-system             pod/kube-proxy-qw9fj                                  1/1     Running     0          31m
    kube-system             pod/kube-scheduler-kind-control-plane                 1/1     Running     0          32m
    kube-system             pod/metrics-server-8ff8f88c6-69t9z                    0/1     Running     0          4m21s
    local-path-storage      pod/local-path-provisioner-684f458cdd-f6zqf           1/1     Running     0          31m
    metallb-system          pod/controller-84d6d4db45-bph5x                       1/1     Running     0          29m
    metallb-system          pod/speaker-pcl4p                                     1/1     Running     0          29m
    projectcontour          pod/contour-7b9b9cdfd6-h5jzg                          1/1     Running     0          6m43s
    projectcontour          pod/contour-7b9b9cdfd6-nhbq2                          1/1     Running     0          6m43s
    projectcontour          pod/contour-certgen-v1.23.2-hxh7k                     0/1     Completed   0          6m43s
    projectcontour          pod/envoy-v4xk9                                       2/2     Running     0          6m43s
    servicebinding-system   pod/servicebinding-controller-manager-85f7498cf-xd7jc 2/2     Running     0          115s

    NAMESPACE               NAME                                                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
    cert-manager            service/cert-manager                                        ClusterIP      10.96.153.49    <none>           9402/TCP                     13m
    cert-manager            service/cert-manager-webhook                                ClusterIP      10.96.102.82    <none>           443/TCP                      13m
    default                 service/kubernetes                                          ClusterIP      10.96.0.1       <none>           443/TCP                      32m
    kpack                   service/kpack-webhook                                       ClusterIP      10.96.227.201   <none>           443/TCP                      9m40s
    kube-system             service/kube-dns                                            ClusterIP      10.96.0.10      <none>           53/UDP,53/TCP,9153/TCP       32m
    kube-system             service/metrics-server                                      ClusterIP      10.96.204.62    <none>           443/TCP                      4m21s
    metallb-system          service/webhook-service                                     ClusterIP      10.96.186.139   <none>           443/TCP                      29m
    projectcontour          service/contour                                             ClusterIP      10.96.138.58    <none>           8001/TCP                     6m43s
    projectcontour          service/envoy                                               LoadBalancer   10.96.126.44    172.18.255.200   80:30632/TCP,443:30730/TCP   6m43s
    servicebinding-system   service/servicebinding-controller-manager-metrics-service   ClusterIP      10.96.147.189   <none>           8443/TCP                     115s
    servicebinding-system   service/servicebinding-webhook-service                      ClusterIP      10.96.14.224    <none>           443/TCP                      115s

What are some alternatives?

When comparing cert-manager and metrics-server you can also consider the following projects:

metallb - A network load-balancer implementation for Kubernetes using standard routing protocols

prometheus - The Prometheus monitoring system and time series database.

aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers

k8s-prometheus-adapter - An implementation of the custom.metrics.k8s.io API using Prometheus

Portainer - Making Docker and Kubernetes management easy.

kube-state-metrics - Add-on agent to generate and expose cluster-level metrics.

awx-operator - An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible. 🤖

kube-prometheus - Use Prometheus to monitor Kubernetes and applications running on Kubernetes

k3s - Lightweight Kubernetes

istio - Connect, secure, control, and observe services.

oauth2-proxy - A reverse proxy that provides authentication with Google, Azure, OpenID Connect and many more identity providers.

k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!