amazon-eks-ami VS amazon-vpc-cni-k8s

Compare amazon-eks-ami vs amazon-vpc-cni-k8s and see what their differences are.

amazon-eks-ami

Packer configuration for building a custom EKS AMI (by awslabs)

amazon-vpc-cni-k8s

Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS (by aws)
                 amazon-eks-ami       amazon-vpc-cni-k8s
Mentions         19                   12
Stars            2,345                2,197
Stars growth     1.6%                 1.5%
Activity         9.2                  9.2
Latest commit    7 days ago           4 days ago
Language         Shell                Go
License          MIT No Attribution   Apache License 2.0
The number of mentions indicates the total mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

amazon-eks-ami

Posts with mentions or reviews of amazon-eks-ami. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-05.
  • [Request for opinion] : CPU limits in the K8s world
    1 project | /r/kubernetes | 10 Dec 2023
    Careful assuming system-reserved will be present. Last I checked, AWS EKS does not reserve resources for the kubelet by default, and as a result pods can starve it of resources (e.g., https://github.com/awslabs/amazon-eks-ami/issues/79). This is of course more important for memory, but could impact CPU as well.
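One way to add such reservations on an EKS node is through the bootstrap script that ships with amazon-eks-ami. A minimal sketch, assuming the stock `/etc/eks/bootstrap.sh`; the cluster name and reservation sizes below are illustrative placeholders, not AWS recommendations:

```shell
# Reserve CPU and memory for the kubelet and system daemons at node bootstrap.
# bootstrap.sh is part of amazon-eks-ami; the values here are examples only.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--system-reserved=cpu=100m,memory=100Mi --kube-reserved=cpu=100m,memory=200Mi'
```

Pods then see an allocatable capacity reduced by these reservations, so a memory-hungry pod can no longer push the kubelet itself out of memory.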
  • Compile Linux Kernel 6.x on AL2? 😎
    2 projects | /r/aws | 5 Jun 2023
    For example, this is available for AL2: https://github.com/awslabs/amazon-eks-ami
  • Hands-on lab for studying the EKS, which scenarios I should learn?
    1 project | /r/kubernetes | 10 May 2023
    I found this document that lists the pod limits per node size. I suspect you will want to consider larger worker nodes or you will very quickly be unable to schedule additional workloads.
  • k3s on AWS,does it make sense?
    3 projects | /r/kubernetes | 4 May 2023
    source
  • EKS Worker Nodes on RHEL 8?
    2 projects | /r/kubernetes | 3 May 2023
  • Five Rookie Mistakes with Kubernetes on AWS. Which were yours?
    1 project | /r/kubernetes | 21 Apr 2023
    Issue 1 is a known issue due to the memory reservation being too low; see, e.g., https://github.com/awslabs/amazon-eks-ami/issues/1145
  • EKS: Shouldn't the node autoscaling group take the pod limit into consideration?
    1 project | /r/aws | 12 Apr 2023
    No, a new node is added when there are not enough resources to start a new pod. So if you have many pods with small resource usage you can hit the pods-per-node limit; on EKS the maximum number of pods depends on the instance type: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt You can increase that limit: https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
  • Blog: KWOK: Kubernetes WithOut Kubelet
    8 projects | news.ycombinator.com | 1 Mar 2023
    The number of pods is essentially capped by the worker node choice.

    below excerpt from: https://github.com/awslabs/amazon-eks-ami/blob/master/files/...

      # Mapping is calculated from AWS EC2 API using the following formula:
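The formula behind that file can be sketched in shell. The ENI and per-ENI IP counts below are examples taken from common instance types; for other types they must be looked up via the EC2 API:

```shell
# max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# The first IP on each ENI is the primary address and is not used for pods;
# the +2 accounts for pods that use host networking.
max_pods() {
  local enis=$1 ips_per_eni=$2
  echo $(( enis * (ips_per_eni - 1) + 2 ))
}

max_pods 3 10   # m5.large: 3 ENIs with 10 IPv4 addresses each -> 29
max_pods 3 6    # t3.medium: 3 ENIs with 6 IPv4 addresses each -> 17
```

This is why the cap jumps in steps with instance size rather than scaling with CPU or memory.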
  • Tips on working with EKS
    2 projects | /r/kubernetes | 7 Feb 2023
    See also: EKS nodes lose readiness when containers exhaust memory
  • Best managed kubernetes platform
    1 project | /r/kubernetes | 22 Oct 2022
    So it manifests itself in this way: your pod is scheduled but remains Pending forever. You check the logs and you see that it's complaining that it can't get an IP address. Ultimately, if you check here, you see the maximum number of pods that can be scheduled on any underlying EC2 instance, even if you have IPs remaining in your subnet. I found this to be one of the most poorly understood phenomena in EKS. Even those who claimed to "crack" it and wrote fancy blog posts about it fundamentally got it wrong. AFAIK this document reflects the official AWS guidance on how to mitigate this.

amazon-vpc-cni-k8s

Posts with mentions or reviews of amazon-vpc-cni-k8s. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-03.
  • How does configuring AWS EKS work?
    1 project | /r/kubernetes | 2 Jul 2023
  • EKS Worker Nodes on RHEL 8?
    2 projects | /r/kubernetes | 3 May 2023
    The same approach hasn't worked very well or very consistently with RHEL. I'm using containerd as the runtime. Because iptables-legacy is hardcoded out of RHEL 8, I'm using iptables-nft (installed on OS). I use Terraform to deploy the cluster and provide configuration values to tell vpc-cni to enable nftables (noted at the bottom here: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/troubleshooting.md), as well as to install and enable ipvs on kube-proxy.
  • New-Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.26
    2 projects | dev.to | 24 Apr 2023
    The Amazon Virtual Private Cloud (VPC) Container Network Interface (CNI) plugin: you must upgrade your VPC CNI plugin to version 1.12 or higher. Earlier versions of the VPC CNI will crash, because they relied on the CRI v1alpha2 API, which has been removed from Kubernetes v1.26. For step-by-step instructions to upgrade the VPC CNI in your cluster, refer to Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.
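If the VPC CNI is installed as a managed EKS add-on, the upgrade can be driven from the AWS CLI. A hedged sketch; the cluster name and exact add-on version string are placeholders to be replaced with real values:

```shell
# List vpc-cni versions compatible with the cluster's Kubernetes version
aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.26

# Upgrade the managed add-on (version string is an example)
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.12.6-eksbuild.2 \
  --resolve-conflicts OVERWRITE
```

Running the upgrade before moving the control plane to 1.26 avoids the CRI-related crash described above.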
  • Blog: KWOK: Kubernetes WithOut Kubelet
    8 projects | news.ycombinator.com | 1 Mar 2023
    I believe you're correct, although pedantically that would only apply if one is using their vpc-cni <https://github.com/aws/amazon-vpc-cni-k8s#readme> and not with a competing CNI. Kubelet offers a configurable for the number of Pods per Node <https://github.com/kubernetes/kubelet/blob/v0.26.2/config/v1...> which defaults to 110 for what I would presume is CIDR or pid cgroups reasons and thus is unlikely to differ by instance size as the ENI limit you mention does (IIRC)
  • Pods stuck in ContainerCreating with "failed to assign an IP address to container"
    1 project | /r/codehunter | 2 Sep 2022
    Upgraded to v1.12 on EKS and CNI 1.5.0. This issue was closed stating CNI 1.5.0 solved the issue. It did not for us. In another thread leaking ENIs was blamed but was also closed due to CNI upgrade.
  • How to understand the IP and host of client under company's VPN
    1 project | /r/kubernetes | 17 May 2022
    Take a look at the GitHub repo for the EKS CNI; I think the parameter AWS_VPC_K8S_CNI_RANDOMIZESNAT will address the port issue. We had a similar problem and this worked around it (we did end up solving it another way).
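That parameter is an environment variable on the aws-node DaemonSet that the VPC CNI runs as. One way to set it, sketched here under the assumption of a standard EKS install (`prng` is one of the documented values):

```shell
# Enable randomized SNAT source-port allocation on the VPC CNI
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_RANDOMIZESNAT=prng

# Verify the variable landed on the DaemonSet pod template
kubectl describe daemonset aws-node -n kube-system | grep RANDOMIZESNAT
```

Note that `kubectl set env` triggers a rolling restart of the aws-node pods.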
  • EKS and the quest for IP addresses: Secondary CIDR ranges and private NAT gateways
    1 project | dev.to | 10 Feb 2022
    EKS, the managed Kubernetes offering by AWS, uses the Amazon VPC CNI plugin for Kubernetes by default. Unlike most networking implementations, it assigns each pod a dedicated IP address in the VPC, the network the nodes reside in.
  • aws/amazon-vpc-cni-k8s: Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
    1 project | /r/devopsish | 23 Oct 2021
  • EKS Cluster Nodes stuck in NotReady state (missing cni config/binary)
    1 project | /r/aws | 30 Sep 2021
    You might be able to get better help or research closed issues on the GitHub issues page: https://github.com/aws/amazon-vpc-cni-k8s/issues Are you able to scale up your old node group with the smaller instance size and see if it works? The few times I hit issues around the network not being ready on a worker node in EKS, it ended up being a permission-related issue. Check whether there are missing permissions on the new node group role or on the aws-node IAM role, and make sure the aws-node role has the AmazonEKS_CNI_Policy policy attached.
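The permission check suggested above can be done from the AWS CLI. A sketch assuming the node role is named `my-node-role` (a placeholder; the policy ARN is the AWS-managed one):

```shell
# List policies attached to the node role; AmazonEKS_CNI_Policy should appear
aws iam list-attached-role-policies --role-name my-node-role

# Attach the managed policy if it is missing
aws iam attach-role-policy \
  --role-name my-node-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
```

Without this policy, aws-node cannot call the EC2 APIs it needs to attach ENIs and allocate pod IPs, which shows up as nodes stuck in NotReady.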
  • EKS VPC CNI add-on: Support for high pod density in node
    1 project | dev.to | 7 Aug 2021
    By default, the number of IP addresses available to assign to pods is based on the number of IP addresses assigned to Elastic network interfaces and the number of network interfaces attached to your Amazon EC2 node. The Amazon VPC CNI add-on (v1.9.0 or later) can be configured to assign /28 (16 IP addresses) IP address prefixes, instead of assigning individual IP addresses to network interfaces.
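Prefix delegation is switched on via an environment variable on the aws-node DaemonSet. A hedged sketch, assuming VPC CNI v1.9.0 or later on Nitro-based instances:

```shell
# Assign /28 IPv4 prefixes to ENIs instead of individual secondary IPs
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
```

The kubelet's max-pods setting must also be raised to match, since the default per-instance cap still reflects the non-prefix ENI formula.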

What are some alternatives?

When comparing amazon-eks-ami and amazon-vpc-cni-k8s you can also consider the following projects:

calico - Cloud native networking and network security

istio - Connect, secure, control, and observe services.

amazon-eks-pod-identity-webhook - Amazon EKS Pod Identity Webhook

multus-cni - A CNI meta-plugin for multi-homed pods in Kubernetes

prometheus - The Prometheus monitoring system and time series database.

minikube - Run Kubernetes locally

envoy - Cloud-native high-performance edge/middle/service proxy

kubelet - kubelet component configs

skopeo - Work with remote image registries - retrieving information, images, signing content

kind - Kubernetes IN Docker - local clusters for testing Kubernetes

Grafana - The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.

k3d - Little helper to run CNCF's k3s in Docker