amazon-vpc-cni-k8s vs kind

| | amazon-vpc-cni-k8s | kind |
|---|---|---|
| Mentions | 12 | 183 |
| Stars | 2,201 | 12,818 |
| Growth | 0.8% | 1.2% |
| Activity | 9.2 | 8.9 |
| Latest commit | 7 days ago | 13 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
amazon-vpc-cni-k8s
- How does configuring AWS EKS work?
-
EKS Worker Nodes on RHEL 8?
The same approach hasn't worked very well or very consistently with RHEL. I'm using containerd as the runtime. Because iptables-legacy has been removed from RHEL 8, I'm using iptables-nft (installed on the OS). I use Terraform to deploy the cluster and provide configuration values telling vpc-cni to enable nftables (noted at the bottom here: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/troubleshooting.md), as well as to install and enable IPVS on kube-proxy.
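A hedged sketch of the configuration-values mechanism that excerpt describes, shown with the AWS CLI rather than Terraform. The exact environment key for nftables support lives in the linked troubleshooting doc; `ENABLE_NFTABLES` below is a stand-in, not a confirmed variable name.

```sh
# Sketch: pass configuration values to the managed vpc-cni add-on.
# ENABLE_NFTABLES is a placeholder key -- check the troubleshooting doc
# linked above for the actual setting your CNI version expects.
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"env":{"ENABLE_NFTABLES":"true"}}'
```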
-
New-Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.26
Virtual Private Cloud (VPC) Container Network Interface (CNI) plugin: you must upgrade your VPC CNI plugin to version 1.12 or higher. Earlier versions of the VPC CNI will crash, because they rely on the CRI v1alpha2 API, which has been removed from Kubernetes v1.26. For step-by-step instructions to upgrade the VPC CNI in your cluster, refer to Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.
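To confirm which VPC CNI version a cluster is running before the 1.26 upgrade, something along these lines works (the image-tag grep is the usual trick; adjust if your manifest names differ):

```sh
# Print the image tag of the aws-node daemonset, which carries the CNI version.
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ':' -f 3
```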
-
Blog: KWOK: Kubernetes WithOut Kubelet
I believe you're correct, although pedantically that would only apply if one is using their vpc-cni <https://github.com/aws/amazon-vpc-cni-k8s#readme> and not a competing CNI. The kubelet offers a configurable limit on the number of Pods per Node <https://github.com/kubernetes/kubelet/blob/v0.26.2/config/v1...>, which defaults to 110, presumably for CIDR or pid-cgroup reasons, and thus is unlikely to vary by instance size the way the ENI limit you mention does (IIRC)
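For reference, that 110-pod default is the kubelet's `maxPods` field; a minimal KubeletConfiguration raising it might look like this (values purely illustrative):

```sh
# Minimal KubeletConfiguration overriding the default pod-per-node limit.
cat <<EOF > kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250   # default is 110
EOF
```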
-
Pods stuck in ContainerCreating with "failed to assign an IP address to container"
We upgraded to v1.12 on EKS and CNI 1.5.0. This issue was closed stating that CNI 1.5.0 solved it. It did not for us. Another thread blamed leaking ENIs, but it too was closed on the grounds of the CNI upgrade.
-
How to understand the IP and host of client under company's VPN
Take a look at the GitHub repo for the EKS CNI. I think the parameter AWS_VPC_K8S_CNI_RANDOMIZESNAT will address the port issue. We had a similar problem and this worked around it (we did end up solving it another way).
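A sketch of flipping that parameter on a live cluster; `AWS_VPC_K8S_CNI_RANDOMIZESNAT` accepts `hashrandom` (the default), `prng`, and `none` in the versions I've seen, but verify against the repo's docs for yours:

```sh
# Switch SNAT port allocation to pseudo-random (prng); default is hashrandom.
kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_RANDOMIZESNAT=prng
```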
-
EKS and the quest for IP addresses: Secondary CIDR ranges and private NAT gateways
EKS, the managed Kubernetes offering by AWS, uses the Amazon VPC CNI plugin for Kubernetes by default. Unlike most networking implementations, this assigns each pod a dedicated IP address in the VPC, the network the nodes reside in.
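One quick way to observe this property on an EKS cluster: the pod IPs kubectl prints fall inside the VPC's CIDR ranges, right alongside the node IPs.

```sh
# Pod IPs come straight from the VPC subnets, same address space as the nodes.
kubectl get pods -A -o wide   # IP column shows VPC-routable pod addresses
kubectl get nodes -o wide     # INTERNAL-IP sits in the same VPC ranges
```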
- aws/amazon-vpc-cni-k8s: Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
-
EKS Cluster Nodes stuck in NotReady state (missing cni config/binary)
You might be able to get better help, or research closed issues, on the GitHub issues page: https://github.com/aws/amazon-vpc-cni-k8s/issues . Are you able to scale up your old node group with the smaller instance size and see if it works? The few times I hit issues around the network not being ready on a worker node in EKS, it ended up being a permissions issue. I wonder if there are some missing permissions on the new node group role or on the aws-node IAM role. Make sure the aws-node role has the AmazonEKS_CNI_Policy policy attached.
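A one-liner for the policy check suggested above; the role name is a hypothetical placeholder and should match your node group or aws-node IAM role:

```sh
# Attach the managed CNI policy to the node role (role name is a placeholder).
aws iam attach-role-policy \
  --role-name my-eks-node-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
```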
-
EKS VPC CNI add-on: Support for high pod density in node
By default, the number of IP addresses available to assign to pods is based on the number of IP addresses assigned to Elastic network interfaces and the number of network interfaces attached to your Amazon EC2 node. The Amazon VPC CNI add-on (v1.9.0 or later) can be configured to assign /28 (16 IP addresses) IP address prefixes, instead of assigning individual IP addresses to network interfaces.
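Enabling prefix assignment is an env toggle on the aws-node daemonset (v1.9.0 or later), roughly as below; self-managed installs use kubectl directly, while the managed add-on takes the same key through its configuration values:

```sh
# Turn on /28 prefix delegation so each ENI IP slot yields 16 pod addresses.
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
```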
kind
-
Take a look at traefik, even if you don't use containers
Have you tried https://kind.sigs.k8s.io/? If so, how does it compare to k3s for testing?
-
How to distribute workloads using Open Cluster Management
To get started, you'll need to install clusteradm and kubectl and start up three Kubernetes clusters. To simplify cluster administration, this article starts up three kind clusters, with names and purposes along the lines of the sketch below:
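The cluster names here (hub, cluster1, cluster2) are assumptions for illustration; the article's own names may differ.

```sh
# Spin up one hub and two managed clusters for the OCM walkthrough
# (names are assumed, not taken from the article).
kind create cluster --name hub       # runs the OCM hub components
kind create cluster --name cluster1  # first managed cluster
kind create cluster --name cluster2  # second managed cluster
```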
-
15 Options To Build A Kubernetes Playground (with Pros and Cons)
Kind is a tool for running local Kubernetes clusters using Docker container "nodes." It was primarily designed for testing Kubernetes itself but can also be used for local development or continuous integration.
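A minimal example of what that looks like in practice, following the usual kind workflow; a config file is only needed for anything beyond a single node:

```sh
# Simplest case: a single-node cluster.
kind create cluster --name quick-test

# Multi-node topology via a config file fed on stdin.
cat <<EOF | kind create cluster --name multi-node --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
```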
-
Exploring OpenShift with CRC
Fortunately, just as projects like kind and Minikube let developers spin up a local Kubernetes environment in no time, CRC (also known as OpenShift Local, and a recursive acronym for "CRC - Runs Containers") offers developers a local OpenShift environment by means of a pre-configured VM, similar to how Minikube works under the hood.
-
K3s Traefik Ingress - configured for your homelab!
I recently purchased a used Lenovo M900 ThinkCentre (i7 with 32GB RAM) from eBay to expand my mini-homelab, which was just a single Synology DS218+ plugged into my ISP's router (yuck!). Since I've been spending a big chunk of time at work playing around with Kubernetes, I figured I'd put my skills to the test and run a k3s node on the new server. While I was familiar with k3s before starting this project, I'd never actually run it, opting instead for tools like kind (and minikube before that) to run small test clusters for my local development work.
-
Mykube - simple CLI for single-node K8s creation
Features compared to https://kind.sigs.k8s.io/
-
Hacking in kind (Kubernetes in Docker)
Kind allows you to run a Kubernetes cluster inside Docker. This is incredibly useful for developing Helm charts, Operators, or even just testing out different k8s features in a safe way.
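For the chart/Operator loop the excerpt mentions, the handy piece is side-loading locally built images straight into the cluster, no registry required (the image name here is hypothetical):

```sh
# Build an image locally and load it into the kind cluster's nodes.
docker build -t my-operator:dev .
kind load docker-image my-operator:dev   # targets the default cluster ("kind")
# Pods referencing my-operator:dev now resolve it without a registry push
# (set imagePullPolicy to IfNotPresent so the kubelet uses the loaded image).
```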
-
Choosing the Next Step: Docker Swarm or Kubernetes After Mastering Docker?
Check out KinD
-
K3s – Lightweight Kubernetes
If you're just messing around, just use kind (https://kind.sigs.k8s.io) or minikube if you want VMs (https://minikube.sigs.k8s.io). Both work on ARM-based platforms.
You can also use k3s; it's hella easy to get started with and it works great.
-
Two approaches to make your APIs more secure
We'll install APIClarity into a Kubernetes cluster to test our API documentation. We're using a Kind cluster for demonstration purposes. Of course, if you have another Kubernetes cluster up and running elsewhere, all steps also work there.
What are some alternatives?
istio - Connect, secure, control, and observe services.
minikube - Run Kubernetes locally
multus-cni - A CNI meta-plugin for multi-homed pods in Kubernetes
k3d - Little helper to run CNCF's k3s in Docker
lima - Linux virtual machines, with a focus on running containers
amazon-eks-ami - Packer configuration for building a custom EKS AMI
vcluster - vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
kubelet - kubelet component configs
colima - Container runtimes on macOS (and Linux) with minimal setup
nerdctl - contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS, ...