amazon-eks-ami vs conduit

| | amazon-eks-ami | conduit |
|---|---|---|
| Mentions | 19 | 33 |
| Stars | 2,351 | 10,376 |
| Growth | 0.8% | 0.9% |
| Activity | 9.2 | 9.9 |
| Last commit | 4 days ago | about 11 hours ago |
| Language | Shell | Go |
| License | MIT No Attribution | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
amazon-eks-ami
-
[Request for opinion] : CPU limits in the K8s world
Be careful assuming system-reserved will be present. Last I checked, AWS EKS does not set system-reserved resources for the kubelet by default, and as a result pods can starve system components of resources (e.g., https://github.com/awslabs/amazon-eks-ami/issues/79). This is of course more important for memory, but it can impact CPU as well.
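If you do want reservations on self-managed nodes built from this AMI, a minimal sketch of the idea is to pass the kubelet flags through bootstrap.sh (the cluster name and reservation values below are placeholders, not recommendations):

```bash
# Hypothetical user-data fragment for a self-managed node on the AL2 EKS AMI.
# The reservation sizes are illustrative only; tune them per instance type.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--system-reserved=cpu=100m,memory=200Mi --kube-reserved=cpu=100m,memory=300Mi --eviction-hard=memory.available<200Mi'
```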
-
Compile Linux Kernel 6.x on AL2? 😎
For example, this is available for AL2: https://github.com/awslabs/amazon-eks-ami
-
Hands-on lab for studying EKS: which scenarios should I learn?
I found this document that lists the pod limits per node size. I suspect you will want to consider larger worker nodes or you will very quickly be unable to schedule additional workloads.
-
k3s on AWS, does it make sense?
source
- EKS Worker Nodes on RHEL 8?
-
Five Rookie Mistakes with Kubernetes on AWS. Which were yours?
Issue 1 is a known issue caused by the memory reservation being too low; see e.g. https://github.com/awslabs/amazon-eks-ami/issues/1145
-
EKS: Shouldn't the nodes autoscaling group take the pod limit into consideration?
No, a new node is added only if there are not enough resources to start a new pod. So if you have many pods with small resource requests, you can hit the pods-per-node limit; on EKS there is a maximum number of pods per node depending on the instance type: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt. You can increase that limit: https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
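The linked docs describe "prefix delegation" for the VPC CNI, which raises that per-node ceiling. A rough, hedged sketch of the flow (the cluster name and max-pods value are examples, not recommendations):

```bash
# 1. Enable prefix delegation in the VPC CNI (aws-node daemonset).
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true

# 2. Launch new nodes with a higher kubelet max-pods instead of the
#    eni-max-pods.txt default (example value; compute the right number
#    for your instance type per the AWS docs).
/etc/eks/bootstrap.sh my-cluster \
  --use-max-pods false \
  --kubelet-extra-args '--max-pods=110'
```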
-
Blog: KWOK: Kubernetes WithOut Kubelet
# of pods is essentially capped by the worker node choice.
Below is an excerpt from: https://github.com/awslabs/amazon-eks-ami/blob/master/files/...
# Mapping is calculated from AWS EC2 API using the following formula:
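As I recall it (verify against the current copy of that file), the formula multiplies the instance's ENI limit by the usable IPv4 addresses per ENI and adds two for the host-networking pods. A tiny sketch with example numbers:

```bash
# Max pods per node as derived in eni-max-pods.txt (example values below;
# the real ENI and IP-per-ENI limits vary by EC2 instance type).
enis=3          # e.g. an m5.large allows up to 3 ENIs
ips_per_eni=10  # e.g. 10 IPv4 addresses per ENI on that type
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "$max_pods"   # 29, which matches the m5.large entry in the file
```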
-
Tips on working with EKS
See also: EKS nodes lose readiness when containers exhaust memory
-
Best managed kubernetes platform
So it manifests itself in this way: your pod is scheduled but remains Pending forever. You check the logs and see that it's complaining that it can't get an IP address. Ultimately, if you check here, you see the maximum number of pods that can be scheduled on any underlying EC2 instance, even if you have remaining IPs in your subnet. I found this to be one of the most poorly understood phenomena in EKS. Even those who claimed to have "cracked" it and wrote fancy blog posts about it fundamentally got it wrong. AFAIK this document reflects the official AWS guide on how to mitigate it.
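A quick, hedged way to check whether you are bumping into the per-instance pod cap (pod, namespace, and node names below are placeholders):

```bash
# Look at the stuck pod's events for IP-assignment / "too many pods" messages.
kubectl describe pod my-app-7d4b9c -n my-namespace

# Compare how many pods the node advertises as allocatable with how many
# are already running on it.
kubectl get node my-node -o jsonpath='{.status.allocatable.pods}{"\n"}'
kubectl get pods -A --field-selector spec.nodeName=my-node --no-headers | wc -l
```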
conduit
-
Optimal JMX Exposure Strategy for Kubernetes Multi-Node Architecture
Leverage a service mesh like Istio or Linkerd to manage communication between microservices within the Kubernetes cluster. These service meshes can be configured to intercept JMX traffic and enforce access control policies. Benefits:
-
Linkerd no longer shipping open source, stable releases
Looks like the CNCF waved them through graduation anyway; let's look at the policies from July 28, 2021, when they were deemed "Graduated".
All maintainers of the Linkerd project had @buoyant.io email addresses. [0] They do list 4 other members of a "Steering Committee", but Linkerd's GOVERNANCE.md gives all of the power to maintainers: [1]
> Ideally, all project decisions are resolved by maintainer consensus. If this is not possible, maintainers may call a vote. The voting process is a simple majority in which each maintainer receives one vote.
And CNCF Graduation policy says a project must "Have committers from at least two organizations" [2]. So it appears that the CNCF accepted the "Steering Committee" as an acceptable 2nd committer, even though the Governance policy still gave the maintainers all of the power.
I would like to know whether the Steering Committee voted to remove stable releases from an unbiased position, acting in the best interest of the project, or whether they were simply ignored or not even advised of the decision.
I'm all for Buoyant doing what they need to do to make money and survive as a company. But at that point, my opinion is that they should withdraw the project from the CNCF and stop pretending that the foundation has any influence on the project's governance.
[0] https://github.com/linkerd/linkerd2/blob/489ca1e3189b6a5289d...
-
Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cilium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
-
Istio moved to CNCF Graduation stage
https://linkerd.io/ is a much lighter-weight alternative, but you do still get some of the fancy things like mTLS without needing any manual configuration. Install it, label your namespaces, and let it do its thing!
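For reference, a minimal sketch of that flow with the linkerd CLI (the namespace name is an example; recent Linkerd releases split CRD installation into a separate step, and injection is driven by a namespace annotation):

```bash
# Install the control plane (newer releases install CRDs first).
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -

# Mark a namespace for automatic proxy injection, then restart its workloads
# so the sidecars (and mTLS) are added.
kubectl annotate namespace my-app linkerd.io/inject=enabled
kubectl rollout restart deployment -n my-app

# Sanity-check the data plane.
linkerd check --proxy
```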
-
Custom Authorization
Would it be possible to create a custom extension with code that authorizes traffic based on my custom access token?
-
API release strategies with API Gateway
Open source API gateway (Apache APISIX and Traefik) and service mesh (Istio and Linkerd) solutions are capable of traffic splitting and can implement patterns like canary release and blue-green deployment. With canary testing, you can critically examine a new release of an API by exposing it to only a small portion of your user base. We will cover canary releases in the next section.
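As an illustration of the traffic-splitting idea, here is a hedged sketch of a canary split in the SMI TrafficSplit form that Linkerd understands (service names, namespace, and weights are made up; other meshes and gateways express the same thing with their own resources):

```bash
kubectl apply -f - <<'EOF'
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-api-split
  namespace: my-app
spec:
  service: my-api            # the root service clients call
  backends:
    - service: my-api-stable # current version keeps ~90% of traffic
      weight: 900
    - service: my-api-canary # new version receives ~10%
      weight: 100
EOF
```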
-
GKE with Consul Service Mesh
I have experimented with other service meshes and I was able to get up to speed quickly: Linkerd = 1 day, Istio = 3 days, NGINX Service Mesh = 5 days, but Consul Connect service mesh took at least 11 days to get off the ground. This is by far the most complex solution available.
-
How is a service mesh implemented at a low level?
https://github.com/linkerd/linkerd2 (random example)
- Kubernetes operator written in Rust
-
What is a service mesh?
Of the many service mesh solutions that exist, the most popular open source ones are Linkerd, Istio, and Consul. Here at Koyeb, we are using Kuma.
What are some alternatives?
calico - Cloud native networking and network security
Zone of Control - ⬡ Zone of Control is a hexagonal turn-based strategy game written in Rust. [DISCONTINUED]
amazon-eks-pod-identity-webhook - Amazon EKS Pod Identity Webhook
Parallel
amazon-vpc-cni-k8s - Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
Fractalide - Reusable Reproducible Composable Software
prometheus - The Prometheus monitoring system and time series database.
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
envoy - Cloud-native high-performance edge/middle/service proxy
istio - Connect, secure, control, and observe services.
skopeo - Work with remote images registries - retrieving information, images, signing content
traefik - The Cloud Native Application Proxy