amazon-vpc-cni-k8s
Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS (by aws)
minikube
Run Kubernetes locally (by kubernetes)
| | amazon-vpc-cni-k8s | minikube |
|---|---|---|
| Mentions | 12 | 79 |
| Stars | 2,201 | 28,434 |
| Growth | 0.8% | 0.7% |
| Activity | 9.2 | 10.0 |
| Last commit | 8 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
amazon-vpc-cni-k8s
Posts with mentions or reviews of amazon-vpc-cni-k8s.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-03.
- How does configuring AWS EKS work?
- EKS Worker Nodes on RHEL 8?
The same approach hasn't worked very well or very consistently with RHEL. I'm using containerd as the runtime. Because iptables-legacy is hardcoded out of RHEL 8, I'm using iptables-nft (installed on OS). I use Terraform to deploy the cluster and provide configuration values to tell vpc-cni to enable nftables (noted at the bottom here: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/troubleshooting.md), as well as to install and enable ipvs on kube-proxy.
- New: Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.26
Amazon Virtual Private Cloud (VPC) Container Network Interface (CNI) plugin: you must upgrade your VPC CNI plugin to version 1.12 or higher. Earlier versions of the VPC CNI will cause the CNI to crash, because they relied on the CRI v1alpha2 API, which has been removed from Kubernetes v1.26. For step-by-step instructions to upgrade the VPC CNI in your cluster, refer to Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.
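Before upgrading, it helps to know which VPC CNI version a cluster is actually running. A quick way to check, assuming kubectl is already configured against the target EKS cluster, is to read the image tag on the aws-node DaemonSet:

```shell
# Print the image of the aws-node DaemonSet; the tag is the VPC CNI version.
# Assumes kubectl is pointed at the target EKS cluster.
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni

# Any tag below v1.12 must be upgraded before moving the cluster to Kubernetes 1.26.
```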
- Blog: KWOK: Kubernetes WithOut Kubelet
I believe you're correct, although pedantically that would only apply if one is using their vpc-cni <https://github.com/aws/amazon-vpc-cni-k8s#readme> and not a competing CNI. Kubelet offers a configurable limit for the number of Pods per Node <https://github.com/kubernetes/kubelet/blob/v0.26.2/config/v1...>, which defaults to 110 for what I would presume are CIDR or pid-cgroup reasons, and thus is unlikely to differ by instance size the way the ENI limit you mention does (IIRC).
- Pods stuck in ContainerCreating with "failed to assign an IP address to container"
Upgraded to v1.12 on EKS and CNI 1.5.0. This issue was closed stating CNI 1.5.0 solved the issue. It did not for us. In another thread leaking ENIs was blamed but was also closed due to CNI upgrade.
- How to understand the IP and host of client under company's VPN
Take a look at the GitHub repo for the EKS CNI. I think the parameter AWS_VPC_K8S_CNI_RANDOMIZESNAT will address the port issue. We had a similar problem and this worked around it (we did end up solving it another way).
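For context, AWS_VPC_K8S_CNI_RANDOMIZESNAT is an environment variable on the aws-node DaemonSet. A minimal sketch of toggling it follows; the accepted values come from the repo's README, but verify them against the CNI version you run:

```shell
# Switch SNAT source-port allocation to a pseudo-random generator.
# AWS_VPC_K8S_CNI_RANDOMIZESNAT accepts hashrandom (default), prng, or none.
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_RANDOMIZESNAT=prng

# Confirm the setting took effect on the DaemonSet.
kubectl describe daemonset aws-node -n kube-system | grep RANDOMIZESNAT
```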
- EKS and the quest for IP addresses: Secondary CIDR ranges and private NAT gateways
EKS, the managed Kubernetes offering by AWS, uses the Amazon VPC CNI plugin for Kubernetes by default. Unlike most networking implementations, it assigns each pod a dedicated IP address in the VPC, the network the nodes reside in.
- aws/amazon-vpc-cni-k8s: Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
- EKS Cluster Nodes stuck in NotReady state (missing cni config/binary)
You might be able to get better help, or research closed issues, on the GitHub issues page: https://github.com/aws/amazon-vpc-cni-k8s/issues. Are you able to scale up your old node group with the smaller instance size and see if it works? The few times I hit issues around the network not being ready on a worker node in EKS, it ended up being a permission-related issue. I wonder if there are some missing permissions on the new node group role or on the aws-node IAM role. Make sure the aws-node role has the AmazonEKS_CNI_Policy policy attached to it.
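If the missing-permission theory fits, attaching the managed policy to the node role can be sketched like this; the role name below is a placeholder for your node group's actual IAM role:

```shell
# Attach the managed CNI policy to the node IAM role.
# my-eks-node-role is a placeholder; substitute your node group's role name.
aws iam attach-role-policy \
  --role-name my-eks-node-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

# Verify the attachment.
aws iam list-attached-role-policies --role-name my-eks-node-role
```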
- EKS VPC CNI add-on: Support for high pod density in node
By default, the number of IP addresses available to assign to pods is based on the number of IP addresses assigned to Elastic network interfaces and the number of network interfaces attached to your Amazon EC2 node. The Amazon VPC CNI add-on (v1.9.0 or later) can be configured to assign /28 (16 IP addresses) IP address prefixes, instead of assigning individual IP addresses to network interfaces.
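Enabling prefix delegation as described above is a one-line change to the aws-node DaemonSet. A minimal sketch, assuming VPC CNI v1.9.0 or later and Nitro-based instance types as the feature requires:

```shell
# Turn on /28 prefix assignment instead of individual secondary IPs.
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true

# Only nodes launched after this change benefit; the node's max-pods value
# must also be raised (e.g. via bootstrap arguments) for the extra IPs to be used.
```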
minikube
Posts with mentions or reviews of minikube.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-04-08.
- Building Llama as a Service (LaaS)
With the containerized Node.js/Express API, I could run multiple containers, scaling to handle more traffic. Using a tool called minikube, we can easily spin up a local Kubernetes cluster to horizontally scale Docker containers. It was possible to keep one shared instance of the database, and many APIs were routed with an internal Kubernetes load balancer.
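The workflow described above (spin up a local cluster, then run several API replicas behind one Service) can be sketched with stock kubectl commands; the deployment name and image are placeholders, not names from the original post:

```shell
# Start a local single-node Kubernetes cluster.
minikube start

# Deploy the containerized API (llama-api and the image name are placeholders).
kubectl create deployment llama-api --image=my-registry/llama-api:latest

# Scale horizontally to three replicas.
kubectl scale deployment llama-api --replicas=3

# Put the replicas behind one in-cluster load-balancing Service.
kubectl expose deployment llama-api --port=80 --target-port=3000
```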
- Can I scale my dockerized Flask solution with Kubernetes?
- Install Minikube, a tool that lets you spin up a Kubernetes cluster on a local machine
- Run minikube start to start your Kubernetes cluster
- Run minikube dashboard to spin up a web-based user interface for managing your Kubernetes cluster
- K3s – Lightweight Kubernetes
If you're just messing around, just use kind (https://kind.sigs.k8s.io) or minikube if you want VMs (https://minikube.sigs.k8s.io). Both work on ARM-based platforms.
You can also use k3s; it's hella easy to get started with and it works great.
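As a point of comparison, getting a throwaway cluster with either tool is only a couple of commands. Driver availability depends on your platform; the VM driver shown is one option among several:

```shell
# kind: the cluster runs as Docker containers, no VM needed.
kind create cluster --name dev
kubectl cluster-info --context kind-dev

# minikube: can run in a VM; pick a driver available on your machine.
minikube start --driver=virtualbox
minikube status

# Clean up.
kind delete cluster --name dev
minikube delete
```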
- Developer’s Guide to Building Kubernetes Cloud Apps ☁️🚀
```
$ minikube addons enable dashboard
💡  dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
    You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
🌟  The 'dashboard' addon is enabled
$ minikube addons enable metrics-server
💡  metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
    You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
🌟  The 'metrics-server' addon is enabled
$ minikube addons enable ingress
💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
    You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
💡  After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled
```
- Implementing TLS in Kubernetes
A Kubernetes distribution: You need to install a Kubernetes distribution to create the Kubernetes cluster and other necessary resources, such as deployments and services. This tutorial uses kind (v0.18.0), but you can use any other Kubernetes distribution, including minikube or K3s.
- Sites you should know: Part One
3. Minikube (https://minikube.sigs.k8s.io):
- Cannot stop 10 containers after Kubernetes minikube tutorial
```
CONTAINER ID  IMAGE                                                 COMMAND                  CREATED       STATUS       NAMES
7523fd2c20c7  gcr.io/google_containers/k8s-dns-sidecar-amd64        "/sidecar --v=2 --..."   18 hours ago  Up 18 hours  k8s_sidecar_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0
9bd438011406  gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64  "/dnsmasq-nanny -v..."   18 hours ago  Up 18 hours  k8s_dnsmasq_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0
5c35e00a5a27  gcr.io/google_containers/k8s-dns-kube-dns-amd64       "/kube-dns --domai..."   18 hours ago  Up 18 hours  k8s_kubedns_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0
77ef463642b7  gcr.io/google_containers/pause-amd64:3.0              "/pause"                 18 hours ago  Up 18 hours  k8s_POD_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0
39f618666205  gcr.io/google_containers/kubernetes-dashboard-amd64   "/dashboard --inse..."   18 hours ago  Up 18 hours  k8s_kubernetes-dashboard_kubernetes-dashboard-vgpjl_kube-system_c1176a44-e4d6-11e7-bccf-0021ccbf0996_0
023b7b554a8c  gcr.io/google_containers/pause-amd64:3.0              "/pause"                 18 hours ago  Up 18 hours  k8s_POD_kubernetes-dashboard-vgpjl_kube-system_c1176a44-e4d6-11e7-bccf-0021ccbf0996_0
1c3bdb7bdeb1  gcr.io/google-containers/kube-addon-manager           "/opt/kube-addons.sh"    18 hours ago  Up 18 hours  k8s_kube-addon-manager_kube-addon-manager-tpad_kube-system_7b19c3ba446df5355649563d32723e4f_0
8a00feefa754  gcr.io/google_containers/pause-amd64:3.0              "/pause"                 18 hours ago  Up 18 hours  k8s_POD_kube-addon-manager-tpad_kube-system_7b19c3ba446df5355649563d32723e4f_0
b657eab5f6f5  gcr.io/k8s-minikube/storage-provisioner               "/storage-provisioner"   18 hours ago  Up 18 hours  k8s_storage-provisioner_storage-provisioner_kube-system_c0a8b187-e4d6-11e7-bccf-0021ccbf0996_0
67be5cc1dd0d  gcr.io/google_containers/pause-amd64:3.0              "/pause"                 18 hours ago  Up 18 hours  k8s_POD_storage-provisioner_kube-system_c0a8b187-e4d6-11e7-bccf-0021ccbf0996_0
```
I just did the Kubernetes minikube tutorial at https://github.com/kubernetes/minikube, and I cannot stop or remove these containers; they always get recreated.
- DNS issue of Alpine/musl solved?
- DevOps experience without Kubernetes
https://github.com/kubernetes/minikube for local learning that's lightweight.
- x509: certificate signed by unknown authority
Haven’t dabbled with minikube yet, but there’s a whole thread about this error here: https://github.com/kubernetes/minikube/issues/9798
What are some alternatives?
When comparing amazon-vpc-cni-k8s and minikube you can also consider the following projects:
istio - Connect, secure, control, and observe services.
colima - Container runtimes on macOS (and Linux) with minimal setup
multus-cni - A CNI meta-plugin for multi-homed pods in Kubernetes
lima - Linux virtual machines, with a focus on running containers
amazon-eks-ami - Packer configuration for building a custom EKS AMI
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
kubelet - kubelet component configs
kubespray - Deploy a Production Ready Kubernetes Cluster
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!
k3d - Little helper to run CNCF's k3s in Docker
helm - The Kubernetes Package Manager