enhancements VS external-dns

Compare enhancements vs external-dns and see how they differ.

external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services (by kubernetes-sigs)
                    enhancements          external-dns
Mentions            58                    79
Stars               3,257                 7,258
Growth              1.6%                  1.9%
Activity            9.7                   9.6
Latest commit       2 days ago            2 days ago
Language            Go                    Go
License             Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

enhancements

Posts with mentions or reviews of enhancements. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-19.
  • IBM to buy HashiCorp in $6.4B deal
    1 project | news.ycombinator.com | 25 Apr 2024
    > was always told early on that although they supported vault on kubernetes via a helm chart, they did not recommend using it on anything but EC2 instances (because of "security", though their reasoning never really made sense).

    The reasoning is basically that there are some security and isolation guarantees you don't get in Kubernetes that you do get on bare metal or (to a somewhat lesser extent) in VMs.

    In particular for Kubernetes, Vault wants to run as a non-root user and set the IPC_LOCK capability when it starts to prevent its memory from being swapped to disk. While in Docker you can directly enable this by adding capabilities when you launch the container, Kubernetes has an issue because of the way it handles non-root container users specified in a pod manifest, detailed in a (long-dormant) KEP: https://github.com/kubernetes/enhancements/blob/master/keps/... (tl;dr: Kubernetes runs the container process as root, with the specified capabilities added, but then switches it to the non-root UID, which causes the explicitly-added capabilities to be dropped).

    You can work around this by rebuilding the container and setting the capability directly on the binary. The upstream build of the binary and the one in the container image don't come with that set, because the user is expected to set it at runtime when running the container image directly, and the systemd unit sets it when running as a systemd service; so there's no need for it except to work around Kubernetes' ambient-capability issue.
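    The rebuild workaround described above can be sketched as a Dockerfile layer that sets the file capability on the binary itself (illustrative only; the base-image tag, package name, and binary path are assumptions, not taken from the post):

    ```dockerfile
    # Hypothetical rebuild sketch: grant IPC_LOCK as a file capability so the
    # container no longer depends on Kubernetes passing ambient capabilities.
    FROM hashicorp/vault:1.15
    USER root
    # setcap comes from the libcap package on Alpine-based images (assumption).
    RUN apk add --no-cache libcap && \
        setcap cap_ipc_lock=+ep /bin/vault
    USER vault
    ```

    With the capability set on the file, the process regains it on exec even after the UID switch the KEP describes.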

    > It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."

  • Exploring cgroups v2 and MemoryQoS With EKS and Bottlerocket
    7 projects | dev.to | 19 Feb 2024
    0 is not the request we've defined. And that makes sense. Memory QoS has been in alpha since Kubernetes 1.22 (August 2021), and according to the KEP it was still in alpha as of 1.27.
  • Jenkins Agents On Kubernetes
    7 projects | dev.to | 4 Sep 2023
    Note: There's actually a Structured Authentication Config established via KEP-3331. It's in v1.28 as a feature-flag-gated option and removes the limitation of only having one OIDC provider. I may look into doing an article on it, but for now I'll deal with the issue in a manner that should work even with somewhat older versions of Kubernetes.
  • Isn't the release cycle becoming a bit crazy with monthly releases and deprecations?
    2 projects | /r/kubernetes | 11 Jul 2023
    Kubernetes supports a skew policy of n+2 between API server and kubelet. This means if your CP and DP are both on 1.20, you could upgrade your control plane twice (1.20 -> 1.21 -> 1.22) before you need to upgrade your data plane. And when it comes time to upgrade your data plane you can jump from 1.20 to 1.22 to minimize update churn. In the future, this skew will be opened to n+3 https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3935-oldest-node-newest-control-plane
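    The skew arithmetic in the post can be sketched as a small check (an illustration, not an official tool; the function name, version-string format, and `max_skew` parameter are assumptions):

    ```python
    # Illustrative sketch of the Kubernetes version-skew rule: the kubelet may
    # be at most N minor versions behind the API server (n+2 historically,
    # n+3 per KEP-3935), and never newer than it.

    def skew_ok(control_plane: str, node: str, max_skew: int = 2) -> bool:
        """Return True if the node (kubelet) version is within the allowed
        skew behind the control plane. Versions are 'MAJOR.MINOR' strings."""
        cp_major, cp_minor = (int(x) for x in control_plane.split("."))
        n_major, n_minor = (int(x) for x in node.split("."))
        if cp_major != n_major:
            return False
        # kubelet must not be newer than the API server, and not more than
        # max_skew minor versions older.
        return 0 <= cp_minor - n_minor <= max_skew

    # The post's scenario: both planes start at 1.20; the control plane can
    # reach 1.22 before the data plane has to move.
    print(skew_ok("1.22", "1.20"))               # True under n+2
    print(skew_ok("1.23", "1.20"))               # False under n+2
    print(skew_ok("1.23", "1.20", max_skew=3))   # True under n+3 (KEP-3935)
    ```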
  • Kubernetes SidecarContainers feature is merged
    7 projects | news.ycombinator.com | 10 Jul 2023
    The KEP (Kubernetes Enhancement Proposal) is linked to in the PR [1]. From the summary:

    > Sidecar containers are a new type of containers that start among the Init containers, run through the lifecycle of the Pod and don’t block pod termination. Kubelet makes a best effort to keep them alive and running while other containers are running.

    [1] https://github.com/kubernetes/enhancements/tree/master/keps/...
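    Under the merged feature, a sidecar is declared as an init container with `restartPolicy: Always`. A minimal sketch (pod name, container names, and images are illustrative, not from the KEP):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: sidecar-demo
    spec:
      initContainers:
        - name: log-shipper
          image: fluent/fluent-bit   # illustrative sidecar image
          restartPolicy: Always      # this field marks it as a sidecar:
                                     # it keeps running alongside the main
                                     # containers and doesn't block termination
      containers:
        - name: app
          image: nginx
    ```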

  • What's there in K8s 1.27
    1 project | dev.to | 4 Jun 2023
    This is where the new feature of mutable scheduling directives for jobs comes into play. This feature enables the updating of a job's scheduling directives before it begins. Essentially, it allows custom queue controllers to influence pod placement without needing to directly handle the assignment of pods to nodes themselves. To learn more about this check out the Kubernetes Enhancement Proposal 2926.
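    In practice this pairs with suspended Jobs: scheduling directives such as `nodeSelector` stay mutable until the Job is unsuspended. A minimal sketch (names, image, and selector values are illustrative assumptions):

    ```yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: queued-job
    spec:
      suspend: true        # created suspended; a queue controller can still
                           # update the scheduling directives below
      template:
        spec:
          nodeSelector: {} # patched (e.g. to {"pool": "batch"}) before release
          containers:
            - name: worker
              image: busybox
              command: ["sh", "-c", "echo done"]
          restartPolicy: Never
    ```

    A custom queue controller (or a `kubectl patch`) can set the node selector and then flip `suspend` to `false` to release the Job.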
  • Dependencies between Services
    1 project | /r/kubernetes | 6 Apr 2023
    What you're asking for is a (vanilla) Kubernetes non-goal; others have mentioned fluxcd and other add-ons that provide primitives for dependency-aware deployments. The problem space is so large that it's unreasonable to address these concerns in Kubernetes itself; instead, make it extensible... Look at this KEP for example: https://github.com/kubernetes/enhancements/issues/753 Sidecar containers have existed, and been named as such, since WAY before that KEP's inception; defining what these things should and shouldn't do is largely arbitrary. In other words: your use case is niche; if you don't like the behavior, use Flux or Argo, or write something yourself.
  • When you learn the Sidecar Container KEP got dropped from the Kubernetes release. Again.
    2 projects | /r/kubernetes | 6 Apr 2023
  • Kubernetes 1.27 will be out next week! - Learn what's new and what's deprecated - Group volume snapshots - Pod resource updates - kubectl subcommands … And more!
    2 projects | /r/kubernetes | 4 Apr 2023
    If you're further interested, I recommend checking out the KEP. I love how they document the decision making, and all these edge cases :).
  • How can I force assign an IP to my Load Balancer ingress in “status.loadBalancer”?
    1 project | /r/kubernetes | 4 Apr 2023
    See https://kubernetes.io/docs/reference/kubectl/conventions/#subresources and https://github.com/kubernetes/enhancements/issues/2590

external-dns

Posts with mentions or reviews of external-dns. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-03.
  • Upgrading Hundreds of Kubernetes Clusters
    17 projects | dev.to | 3 Apr 2024
    The second one is a combination of tools: External DNS, cert-manager, and NGINX ingress. Using these as a stack, you can quickly deploy an application, making it available through a DNS name with a TLS certificate without much effort via simple annotations. When I first discovered External DNS, I was amazed at its quality.
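    The annotation-driven stack the post describes can be sketched with a single Ingress (the hostname, issuer name, and backend service are illustrative assumptions):

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app
      annotations:
        # external-dns watches Ingresses by default and creates a DNS record
        # for spec.rules[].host; cert-manager issues the TLS cert below.
        cert-manager.io/cluster-issuer: letsencrypt-prod  # issuer name assumed
    spec:
      ingressClassName: nginx
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app
                    port:
                      number: 80
      tls:
        - hosts: [app.example.com]
          secretName: app-tls
    ```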
  • Kubernetes External DNS provider for Hetzner
    2 projects | /r/hetzner | 3 Nov 2023
    One of the reasons why I chose Hetzner was that it WAS supported by the ExternalDNS project. I didn't quite understand why the Hetzner provider was pulled, but I saw that an attempt to re-add it was refused, on the grounds that the upcoming webhook architecture would make providers easier to maintain.
  • Istio Multi-Cluster Setup
    1 project | /r/kubernetes | 25 Jun 2023
    Write a custom controller for the external DNS controller, or setup some form of ArgoCD app / appset templating.
  • Looking for ExternalDns alternative for non k8s environment
    1 project | /r/Traefik | 5 Jun 2023
    So I'm looking for an automated way for new routers registered in Traefik to also have the corresponding DNS entry added to my Pihole instance, similar to external-dns, but obviously that tool is exclusive to Ingress on k8s environments. My current setup is Traefik in a container on Unraid.
  • Is a Load Balancer necessary for a HA Cluster?
    1 project | /r/kubernetes | 21 May 2023
    You technically don’t need to run a load balancer or have a virtual IP for your control plane. If you control your dns, you can add an A record pointing to all IPs for your control plane nodes. It won’t load balance your traffic, but combined with something like External DNS it gives you HA for the control plane.
  • How can I assign an EIP to a Kubernetes deployment?
    1 project | /r/aws | 9 May 2023
    I normally deploy external-dns, which automatically updates DNS with the ingress controller's external IP address.
  • Registering DNS with Windows Domain DNS
    1 project | /r/kubernetes | 24 Apr 2023
    Background: Having a look I can see this https://github.com/kubernetes-sigs/external-dns
  • Cluster nodes on different networks
    1 project | /r/kubernetes | 4 Apr 2023
    3) Use the Kubernetes External-DNS. I've never used this, but I'm assuming it can update DNS for each pod/app to point to the correct node (it'd need to update my homelab DNS running on Windows Server)
  • I am stuck on learning how to provision K8s in AWS. Security groups? ALB? ACM? R53?
    3 projects | /r/Terraform | 19 Mar 2023
    So here's the solution I have taken for our current stack. EKS and its dependencies are created through Terraform using the eks module, which also provisions a Route53 subdomain and a wildcard cert. Once that's created, I install this deployment into the cluster via the helm module: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/. This allows me to use Kubernetes resources (load balancers or ingress objects), and it handles all the provisioning of load balancers and security groups for me, based on my application YAML and annotations. We also use https://github.com/kubernetes-sigs/external-dns to manage all of our specific host names for the applications through annotations. So, generally put: Terraform manages our Kubernetes clusters, and Kubernetes manages the deployment of anything needed by the application, including volumes, load balancers, and hostnames, through Kubernetes system deployments.
  • How to expose services/apps to my home network with custom DNS names
    1 project | /r/kubernetes | 26 Feb 2023
    MetalLB for your load balancer (layer 2 mode will do); NGINX-ingress will be spot on for internal home apps; external-dns to publish your DNS records to your DNS server at home: https://github.com/kubernetes-sigs/external-dns

What are some alternatives?

When comparing enhancements and external-dns you can also consider the following projects:

kubeconform - A FAST Kubernetes manifests validator, with support for Custom Resources!

metallb - A network load-balancer implementation for Kubernetes using standard routing protocols

spark-operator - Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.

cloudflare-ingress-controller - A Kubernetes ingress controller for Cloudflare's Argo Tunnels

kubernetes-json-schema - Schemas for every version of every object in every version of Kubernetes

ingress-nginx - Ingress-NGINX Controller for Kubernetes

klipper-lb - Embedded service load balancer in Klipper

crossplane - The Cloud Native Control Plane

Hey - HTTP load generator, ApacheBench (ab) replacement

PowerDNS - PowerDNS Authoritative, PowerDNS Recursor, dnsdist

connaisseur - An admission controller that integrates Container Image Signature Verification into a Kubernetes cluster

awx-operator - An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible. 🤖