multus-cni
A CNI meta-plugin for multi-homed pods in Kubernetes (by intel)
amazon-vpc-cni-k8s
Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS (by aws)
| | multus-cni | amazon-vpc-cni-k8s |
|---|---|---|
| Mentions | 6 | 12 |
| Stars | 2,195 | 2,197 |
| Growth | 3.8% | 0.6% |
| Activity | 8.0 | 9.2 |
| Latest commit | 1 day ago | 7 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Mentions counts the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multus-cni
Posts with mentions or reviews of multus-cni. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-04.
- Run a pod with static MAC address
I have to plan the migration of our software, currently running on a single node with docker-compose, to Kubernetes, and I'm somewhat lost in the plugins and the basic settings. One of our microservices has to use a static MAC address in order to run correctly, because it's tied to a certificate. Is it possible to run a pod with a static MAC address, or do you suggest using multus-cni or something like it?
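For reference, Multus does support this: a pod's network annotation can request a fixed MAC, provided the attachment's CNI plugin chain includes the tuning plugin advertising the mac capability. A minimal sketch, assuming a macvlan attachment; the master interface, subnet, and MAC value below are placeholders:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "macvlan-conf",
    "plugins": [
      {
        "type": "macvlan",
        "master": "eth0",
        "mode": "bridge",
        "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
      },
      {
        "type": "tuning",
        "capabilities": { "mac": true }
      }
    ]
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: static-mac-pod
  annotations:
    # the "mac" field is honored because tuning advertises the mac capability
    k8s.v1.cni.cncf.io/networks: '[
      { "name": "macvlan-conf", "mac": "c2:b0:57:49:47:f1" }
    ]'
spec:
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]
```

Note that only the secondary macvlan interface gets the fixed address; the pod's primary interface, provided by the cluster's default CNI, keeps whatever MAC that plugin assigns.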
- I could use some help figuring out which CNI to use
Multus CNI plug-in: https://github.com/k8snetworkplumbingwg/multus-cni
- Two different networks
- K3s v1.24
I opened a PR to add support for CNI v1.0.0 to Multus (https://github.com/k8snetworkplumbingwg/multus-cni/pull/879) but it was closed in July because 4.0 was "pretty near". Of course now it's almost February and we haven't seen so much as an Alpha of 4.0 since October. Sure wish they'd get their act together.
- Kubernetes with Kubeadm
Multus
- Considering (and deciding against) a switch from Traefik to an Envoy-based Ingress Controller
One thing I will note: at the end of the day, only one process can listen on a given port (say 80 or 443) without some added complexity, and AFAIK there isn't a Multus-like project for Ingress Controllers... A two-tier setup would work, but that just feels like overkill (especially for me).
amazon-vpc-cni-k8s
Posts with mentions or reviews of amazon-vpc-cni-k8s. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-03.
- How does configuring AWS EKS work?
- EKS Worker Nodes on RHEL 8?
The same approach hasn't worked very well or very consistently with RHEL. I'm using containerd as the runtime. Because iptables-legacy is hardcoded out of RHEL 8, I'm using iptables-nft (installed on OS). I use Terraform to deploy the cluster and provide configuration values to tell vpc-cni to enable nftables (noted at the bottom here: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/troubleshooting.md), as well as to install and enable ipvs on kube-proxy.
- New: Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.26
Virtual Private Cloud (VPC) Container Network Interface (CNI) plugin. You must upgrade your VPC CNI plugin to version 1.12 or higher. Earlier versions of the VPC CNI will crash, because they relied on the CRI v1alpha2 API, which has been removed from Kubernetes v1.26. For step-by-step instructions to upgrade the VPC CNI in your cluster, refer to Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.
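If the cluster is managed with eksctl, the add-on version can be pinned in the ClusterConfig before the cluster upgrade; a sketch, with the cluster name, region, and exact version string as placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster   # placeholder
  region: us-east-1  # placeholder
addons:
  - name: vpc-cni
    version: 1.12.0  # must be >= 1.12 before moving the cluster to Kubernetes 1.26
```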
- Blog: KWOK: Kubernetes WithOut Kubelet
I believe you're correct, although, pedantically, that would only apply if one is using their vpc-cni <https://github.com/aws/amazon-vpc-cni-k8s#readme> and not a competing CNI. Kubelet offers a configurable for the number of Pods per Node <https://github.com/kubernetes/kubelet/blob/v0.26.2/config/v1...>, which defaults to 110, presumably for CIDR or pid-cgroup reasons, and thus is unlikely to differ by instance size the way the ENI limit you mention does (IIRC).
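The 110 default the commenter refers to is a kubelet setting, not a CNI one. A minimal KubeletConfiguration fragment showing the knob:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# caps schedulable pods per node; with the VPC CNI the effective limit is
# the lower of this value and the instance type's ENI/IP capacity
maxPods: 110
```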
- Pods stuck in ContainerCreating with "failed to assign an IP address to container"
We upgraded to v1.12 on EKS and CNI 1.5.0. The issue was closed with a note that CNI 1.5.0 solved it; it did not for us. In another thread leaking ENIs was blamed, but that one was also closed after the CNI upgrade.
- How to understand the IP and host of client under company's VPN
Take a look at the GitHub repo for the EKS CNI. I think the parameter AWS_VPC_K8S_CNI_RANDOMIZESNAT will address the port issue. We had a similar problem and this worked around it (we did end up solving it another way).
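That parameter is set as an environment variable on the aws-node DaemonSet in kube-system; a sketch of the relevant container env, with values per the project's README:

```yaml
# excerpt of the aws-node container spec
env:
  - name: AWS_VPC_K8S_CNI_RANDOMIZESNAT
    # "hashrandom" (default) adds --random to the SNAT iptables rule,
    # "prng" uses --random-fully, "none" disables port randomization
    value: "prng"
```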
- EKS and the quest for IP addresses: Secondary CIDR ranges and private NAT gateways
EKS, the managed Kubernetes offering from AWS, uses the Amazon VPC CNI plugin for Kubernetes by default. Unlike most networking implementations, it assigns each pod a dedicated IP address in the VPC, the network the nodes reside in.
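When the primary VPC CIDR runs short, the VPC CNI's custom networking feature can place pods in a secondary CIDR via per-availability-zone ENIConfig objects. A sketch with placeholder IDs, assuming AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true is set on the aws-node DaemonSet:

```yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a            # conventionally named after the availability zone
spec:
  subnet: subnet-0123456789   # placeholder: a subnet carved from the secondary CIDR
  securityGroups:
    - sg-0123456789           # placeholder
```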
- aws/amazon-vpc-cni-k8s: Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
- EKS Cluster Nodes stuck in NotReady state (missing cni config/binary)
You might be able to get better help, or research closed issues, on the GitHub issues page: https://github.com/aws/amazon-vpc-cni-k8s/issues . Are you able to scale up your old node group with the smaller instance size and see if it works? The few times I hit issues with the network not being ready on an EKS worker node, it ended up being permission-related. I wonder if there are missing permissions on the new node group role or on the aws-node IAM role. Make sure the aws-node role has the AmazonEKS_CNI_Policy policy attached.
- EKS VPC CNI add-on: Support for high pod density on nodes
By default, the number of IP addresses available to assign to pods is based on the number of IP addresses assigned to Elastic network interfaces and the number of network interfaces attached to your Amazon EC2 node. The Amazon VPC CNI add-on (v1.9.0 or later) can be configured to assign /28 (16 IP addresses) IP address prefixes, instead of assigning individual IP addresses to network interfaces.
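Prefix delegation is switched on through the aws-node DaemonSet environment as well; a sketch per the add-on's documentation, with WARM_PREFIX_TARGET shown at its default:

```yaml
# excerpt of the aws-node container spec
env:
  - name: ENABLE_PREFIX_DELEGATION
    value: "true"   # attach /28 prefixes instead of individual secondary IPs
  - name: WARM_PREFIX_TARGET
    value: "1"      # keep one spare /28 prefix attached per node
```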
What are some alternatives?
When comparing multus-cni and amazon-vpc-cni-k8s you can also consider the following projects:
cilium - eBPF-based Networking, Security, and Observability
istio - Connect, secure, control, and observe services.
kilo - Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)
minikube - Run Kubernetes locally
antrea - Kubernetes networking based on Open vSwitch
amazon-eks-ami - Packer configuration for building a custom EKS AMI
ingress - WIP Caddy 2 ingress controller for Kubernetes
kubelet - kubelet component configs
kube-router - Kube-router, a turnkey solution for Kubernetes networking.
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
kube-ovn - A Bridge between SDN and Cloud Native (Project under CNCF)
k3d - Little helper to run CNCF's k3s in Docker