| | amazon-eks-ami | kind |
|---|---|---|
| Mentions | 19 | 183 |
| Stars | 2,351 | 12,797 |
| Stars growth (MoM) | 0.8% | 1.0% |
| Activity | 9.2 | 8.9 |
| Last commit | 4 days ago | 4 days ago |
| Language | Shell | Go |
| License | MIT No Attribution | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
amazon-eks-ami
-
[Request for opinion] : CPU limits in the K8s world
Careful assuming system reserved will be present. Last I checked, AWS EKS does not reserve system resources for the kubelet by default, and as a result pods can starve system daemons of resources (e.g., https://github.com/awslabs/amazon-eks-ami/issues/79). This is of course more important for memory, but it can impact CPU as well.
-
Compile Linux Kernel 6.x on AL2? 😎
For example, this is available for AL2: https://github.com/awslabs/amazon-eks-ami
-
Hands-on lab for studying the EKS, which scenarios I should learn?
I found this document that lists the pod limits per node size. I suspect you will want to consider larger worker nodes or you will very quickly be unable to schedule additional workloads.
-
k3s on AWS, does it make sense?
source
- EKS Worker Nodes on RHEL 8?
-
Five Rookie Mistakes with Kubernetes on AWS. Which were yours?
Issue 1 is a known issue due to the memory reservation being too low, see e.g. https://github.com/awslabs/amazon-eks-ami/issues/1145
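For context, the EKS AMI's bootstrap script sizes its kubelet memory reservation from the node's pod capacity: a fixed base plus a per-pod amount. A minimal sketch of that calculation (the 255 MiB base and 11 MiB-per-pod figures follow the AMI's bootstrap logic; the helper name is ours):

```python
def kube_reserved_memory_mib(max_pods: int) -> int:
    """Approximate the memory (MiB) the EKS AMI reserves for the kubelet
    and system daemons: a 255 MiB base plus 11 MiB per schedulable pod."""
    return 255 + 11 * max_pods

# An m5.large defaults to 29 pods, so roughly:
print(kube_reserved_memory_mib(29))  # 574 MiB reserved
```

The point of the linked issue is that, for some instance shapes, this reservation undershoots what the kubelet and system daemons actually consume under load.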
-
EKS: Shouldn't the node autoscaling group take the pod limit into consideration?
No, a new node is added when there are not enough resources to start a new pod. So if you have many pods with small resource requests, you can hit the pods-per-node limit; on EKS the maximum number of pods depends on the instance type - https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt You can increase that limit: https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
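The per-instance cap linked above is derived from ENI capacity. A sketch of the formula stated in the eni-max-pods file header (the m5.large ENI/IP figures come from the EC2 documentation):

```python
def eni_max_pods(num_enis: int, ipv4_per_eni: int) -> int:
    """Default EKS pod limit: each ENI's primary IP is reserved for the
    node itself, and 2 slots are added for host-network pods
    (e.g., aws-node and kube-proxy)."""
    return num_enis * (ipv4_per_eni - 1) + 2

# m5.large: 3 ENIs x 10 IPv4 addresses each
print(eni_max_pods(3, 10))  # 29
```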
-
Blog: KWOK: Kubernetes WithOut Kubelet
The number of pods is essentially capped by the worker node choice.
below excerpt from: https://github.com/awslabs/amazon-eks-ami/blob/master/files/...
# Mapping is calculated from AWS EC2 API using the following formula:
-
Tips on working with EKS
See also: EKS nodes lose readiness when containers exhaust memory
-
Best managed kubernetes platform
So it manifests itself in this way: your pod is scheduled but remains pending forever. You check the logs and see that it's complaining that it can't get an IP address. Ultimately, if you check here, you see the maximum number of pods that can be scheduled on any underlying EC2 instance, even if you have remaining IPs in your subnet. I found this to be one of the most poorly understood phenomena in EKS. Even those who claimed to "crack" it and wrote fancy blog posts about it fundamentally got it wrong. AFAIK this document reflects the official AWS guide on how to mitigate this.
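The AWS-documented mitigation is VPC CNI prefix delegation, under which each secondary-IP slot carries a /28 prefix (16 addresses) instead of a single address, with a recommended kubelet cap on top. A hedged sketch of that calculation, following AWS's max-pods guidance (the function name and parameters are ours):

```python
def max_pods_with_prefixes(num_enis: int, ipv4_per_eni: int, vcpus: int) -> int:
    """With prefix delegation, each secondary-IP slot holds a /28 prefix
    (16 addresses); AWS then recommends capping the kubelet pod limit at
    110 for instances with fewer than 30 vCPUs, 250 otherwise."""
    uncapped = num_enis * (ipv4_per_eni - 1) * 16 + 2
    cap = 110 if vcpus < 30 else 250
    return min(uncapped, cap)

# m5.large (3 ENIs, 10 IPv4 per ENI, 2 vCPUs): uncapped would be 434
print(max_pods_with_prefixes(3, 10, 2))  # 110
```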
kind
-
Take a look at traefik, even if you don't use containers
Have you tried https://kind.sigs.k8s.io/? If so, how does it compare to k3s for testing?
-
How to distribute workloads using Open Cluster Management
To get started, you'll need to install clusteradm and kubectl and start up three Kubernetes clusters. To simplify cluster administration, this article starts up three kind clusters with the following names and purposes:
-
15 Options To Build A Kubernetes Playground (with Pros and Cons)
Kind: is a tool for running local Kubernetes clusters using Docker container "nodes." It was primarily designed for testing Kubernetes itself but can also be used for local development or continuous integration.
-
Exploring OpenShift with CRC
Fortunately, just as projects like kind and Minikube enable developers to spin up a local Kubernetes environment in no time, CRC (also known as OpenShift Local, and a recursive acronym for "CRC - Runs Containers") offers developers a local OpenShift environment by means of a pre-configured VM, similar to how Minikube works under the hood.
-
K3s Traefik Ingress - configured for your homelab!
I recently purchased a used Lenovo M900 Think Centre (i7 with 32GB RAM) from eBay to expand my mini-homelab, which was just a single Synology DS218+ plugged into my ISP's router (yuck!). Since I've been spending a big chunk of time at work playing around with Kubernetes, I figured that I'd put my skills to the test and run a k3s node on the new server. While I was familiar with k3s before starting this project, I'd never actually run it before, opting for tools like kind (and minikube before that) to run small test clusters for my local development work.
-
Mykube - simple CLI for single-node K8s creation
Features compared to https://kind.sigs.k8s.io/
-
Hacking in kind (Kubernetes in Docker)
Kind allows you to run a Kubernetes cluster inside Docker. This is incredibly useful for developing Helm charts, Operators, or even just testing out different k8s features in a safe way.
-
Choosing the Next Step: Docker Swarm or Kubernetes After Mastering Docker?
Check out KinD
-
K3s – Lightweight Kubernetes
If you're just messing around, just use kind (https://kind.sigs.k8s.io) or minikube if you want VMs (https://minikube.sigs.k8s.io). Both work on ARM-based platforms.
You can also use k3s; it's hella easy to get started with and it works great.
-
Two approaches to make your APIs more secure
We'll install APIClarity into a Kubernetes cluster to test our API documentation. We're using a Kind cluster for demonstration purposes. Of course, if you have another Kubernetes cluster up and running elsewhere, all steps also work there.
What are some alternatives?
calico - Cloud native networking and network security
minikube - Run Kubernetes locally
amazon-eks-pod-identity-webhook - Amazon EKS Pod Identity Webhook
k3d - Little helper to run CNCF's k3s in Docker
amazon-vpc-cni-k8s - Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
lima - Linux virtual machines, with a focus on running containers
prometheus - The Prometheus monitoring system and time series database.
vcluster - vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
envoy - Cloud-native high-performance edge/middle/service proxy
colima - Container runtimes on macOS (and Linux) with minimal setup
skopeo - Work with remote images registries - retrieving information, images, signing content
nerdctl - contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS, ...