| | amazon-eks-ami | k0s |
|---|---|---|
| Mentions | 19 | 32 |
| Stars | 2,351 | 2,775 |
| Growth | 0.8% | 5.3% |
| Activity | 9.2 | 9.8 |
| Last commit | 4 days ago | 4 days ago |
| Language | Shell | Go |
| License | MIT No Attribution | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
amazon-eks-ami
-
[Request for opinion] : CPU limits in the K8s world
Be careful assuming system-reserved will be present. Last I checked, AWS EKS does not reserve system resources for the kubelet by default, and as a result pods can starve the kubelet and system daemons of resources (e.g., https://github.com/awslabs/amazon-eks-ami/issues/79). This is of course more important for memory, but could impact CPU as well.
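A minimal sketch of mitigating this on an EKS node, assuming the EKS-optimized AMI's bootstrap.sh; the cluster name and reservation sizes below are placeholder assumptions, not AWS guidance:

```shell
# Reserve CPU/memory for the kubelet and system daemons so pods cannot
# starve them. The sizes here are illustrative placeholders.
KUBELET_EXTRA_ARGS='--system-reserved=cpu=100m,memory=200Mi --kube-reserved=cpu=100m,memory=300Mi'
# On an EKS-optimized AMI this would be passed through at boot
# (commented out here because it only works on that AMI):
# /etc/eks/bootstrap.sh my-cluster --kubelet-extra-args "$KUBELET_EXTRA_ARGS"
echo "$KUBELET_EXTRA_ARGS"
```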
-
Compile Linux Kernel 6.x on AL2? 😎
For example, this is available for AL2: https://github.com/awslabs/amazon-eks-ami
-
Hands-on lab for studying EKS - which scenarios should I learn?
I found this document that lists the pod limits per node size. I suspect you will want to consider larger worker nodes or you will very quickly be unable to schedule additional workloads.
-
k3s on AWS, does it make sense?
-
EKS Worker Nodes on RHEL 8?
-
Five Rookie Mistakes with Kubernetes on AWS. Which were yours?
Issue 1 is a known issue due to memory reservation being too low; see e.g. https://github.com/awslabs/amazon-eks-ami/issues/1145
-
EKS: Shouldn't the nodes autoscaling group take the pod limit into consideration?
No, a new node is added only if there are not enough resources to start a new pod. So if you have many pods with small resource usage you can hit the pods-per-node limit; on EKS there is a maximum number of pods depending on the instance type - https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt You can increase that limit: https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
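The second link raises the limit via VPC CNI prefix delegation. As a hedged sketch of how the sizing changes: the /28-prefix factor of 16 and the 110/250 caps follow AWS's max-pods-calculator guidance, and the m5.large figures are assumptions for illustration:

```shell
# With prefix delegation, each secondary IP slot holds a /28 prefix
# (16 addresses), and AWS recommends capping max-pods at 110 for
# instances with <= 30 vCPUs (250 above that).
# Assumed figures for an m5.large: 3 ENIs, 10 IPs per ENI, 2 vCPUs.
enis=3; ips_per_eni=10; vcpus=2
raw=$(( enis * (ips_per_eni - 1) * 16 + 2 ))
cap=110
if [ "$vcpus" -gt 30 ]; then cap=250; fi
max_pods=$(( raw < cap ? raw : cap ))
echo "$max_pods"   # 110: the cap, not the raw 434, is the binding limit
```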
-
Blog: KWOK: Kubernetes WithOut Kubelet
The number of pods is essentially capped by the worker node choice.
below excerpt from: https://github.com/awslabs/amazon-eks-ami/blob/master/files/...
# Mapping is calculated from AWS EC2 API using the following formula:
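To make that comment concrete, here is the calculation as eni-max-pods.txt documents it, sketched in shell, assuming the formula `# of ENIs * (IPs per ENI - 1) + 2`; the m5.large figures (3 ENIs, 10 IPv4 addresses per ENI) come from the EC2 instance specs:

```shell
# max_pods = enis * (ips_per_eni - 1) + 2
# One IP on each ENI is the interface's primary address (unusable by
# pods), and the +2 accounts for host-networking pods. Example: m5.large.
enis=3
ips_per_eni=10
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "$max_pods"   # 29, matching the m5.large entry in eni-max-pods.txt
```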
-
Tips on working with EKS
See also: EKS nodes lose readiness when containers exhaust memory
-
Best managed kubernetes platform
So it manifests itself in this way: your pod is scheduled but remains Pending forever. You check the logs and you see that it's complaining that it can't get an IP address. Ultimately, if you check here, you see the maximum number of pods that can be scheduled on any underlying EC2 instance, even if you have remaining IPs in your subnet. I found this to be one of the most poorly understood phenomena in EKS. Even those who claimed to "crack" it and wrote fancy blog posts about it fundamentally got it wrong. AFAIK this document reflects the official AWS guide on how to mitigate this.
k0s
-
Seeking Guidance for Transitioning to Kubernetes and SRE/DevOps for traditional infrastructure team
I am myself studying it and going through the official documentation and toying with k8s flavors like kind, k3s and k0s.
-
I was so excited to join this community
There's a whole community of hobbyists building Raspberry Pi clusters, porting things to work on various Arm processors, exploring and contributing to minimalist distros like k0s and microk8s, etc.
-
Blog: KWOK: Kubernetes WithOut Kubelet
-
KWOK: set up a cluster of thousands of nodes in seconds …
root@localhost:~# curl -sSLf https://get.k0s.sh | sudo sh
Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.25.4+k0s.0/k0s-v1.25.4+k0s.0-amd64
k0s is now executable in /usr/local/bin
root@localhost:~# k0s install controller --single
root@localhost:~# k0s start
root@localhost:~# k0s status
Version: v1.25.4+k0s.0
Process ID: 1064
Role: controller
Workloads: true
SingleNode: true
Kube-api probing successful: true
Kube-api probing last error:
root@localhost:~# k0s kubectl cluster-info
Kubernetes control plane is running at https://localhost:6443
CoreDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@localhost:~# k0s kubectl get nodes -o wide
NAME        STATUS   ROLES           AGE    VERSION       INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
localhost   Ready    control-plane   100s   v1.25.4+k0s   172.105.131.23   <none>        Ubuntu 22.04.1 LTS   5.15.0-47-generic   containerd://1.6.9
root@localhost:~# curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.4/bin/linux/amd64/kubectl && chmod +x kubectl && mv kubectl /usr/bin/
root@localhost:~# k0s kubeconfig admin > ~/.kube/config
root@localhost:~# type kubectl
kubectl is hashed (/usr/bin/kubectl)
root@localhost:~# kubectl get po,svc -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   pod/kube-proxy-clxh7                  1/1     Running   0          3m56s
kube-system   pod/kube-router-88x25                 1/1     Running   0          3m56s
kube-system   pod/coredns-5d5b5b96f9-4xzsl          1/1     Running   0          4m3s
kube-system   pod/metrics-server-69d9d66ff8-fxrt7   1/1     Running   0          4m2s
NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP                  4m20s
kube-system   service/kube-dns         ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   4m8s
kube-system   service/metrics-server   ClusterIP   10.98.18.100   <none>        443/TCP                  4m2s
-
vcluster as a Service
I use k0s btw, and it is fantastic.
-
Any Kubernetes provider you could recommend me?
-
Some thoughts on cert-manager moving from Bazel to Make
So for example, in my own personal infra repos and for projects I do, Make orchestrates Pulumi, dnscontrol (Holy shit is that tool underrated), ansible, k0s/k0sctl (I run that distro), and all the kubernetes stuff.
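For context on the k0sctl workflow mentioned here, a minimal sketch of a declarative cluster spec; the cluster name, addresses, and user are placeholder assumptions, not values from the comment:

```shell
# Write a hypothetical minimal k0sctl config: one controller, one worker,
# both reached over SSH. Addresses below are documentation-range placeholders.
cat > k0sctl.yaml <<'EOF'
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: homelab
spec:
  hosts:
    - role: controller
      ssh:
        address: 192.0.2.10
        user: root
    - role: worker
      ssh:
        address: 192.0.2.11
        user: root
EOF
# k0sctl apply --config k0sctl.yaml   # connects over SSH and installs k0s
```

Running `k0sctl apply` again after editing the file reconciles the cluster, which is what makes it pleasant to drive from Make alongside the other tools.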
-
Is the Synology NAS able to run a Kubernetes Cluster ?
I wasn't able to run Kubernetes on the NAS last time I tried it: https://github.com/k0sproject/k0s/issues/1184. As for public access, you don't want to do it for security reasons; rely on a VPN instead. Tailscale and ZeroTier are easy to set up.
-
Kubernetes at Home With K3s
I prefer k0s, https://k0sproject.io/ .
-
Cloudflare Uses HashiCorp Nomad
Actually that is not really true - I strongly urge you to try out http://k3s.io/ or https://k0sproject.io/
These are full-fledged, certified k8s distributions that run on a Raspberry Pi as well as all the way up to production.
https://www.youtube.com/results?search_query=raspberry+pi+k3...
What are some alternatives?
calico - Cloud native networking and network security
k3s - Lightweight Kubernetes
amazon-eks-pod-identity-webhook - Amazon EKS Pod Identity Webhook
k3d - Little helper to run CNCF's k3s in Docker
amazon-vpc-cni-k8s - Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
prometheus - The Prometheus monitoring system and time series database.
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
envoy - Cloud-native high-performance edge/middle/service proxy
Gravitational Teleport - The easiest, most secure way to access and protect all of your infrastructure.
skopeo - Work with remote images registries - retrieving information, images, signing content
istio - Connect, secure, control, and observe services.