amazon-eks-ami
distroless
| | amazon-eks-ami | distroless |
|---|---|---|
| Mentions | 19 | 122 |
| Stars | 2,345 | 17,749 |
| Growth | 1.6% | 2.4% |
| Activity | 9.2 | 9.4 |
| Latest commit | 7 days ago | 4 days ago |
| Language | Shell | Starlark |
| License | MIT No Attribution | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
amazon-eks-ami
-
[Request for opinion] : CPU limits in the K8s world
Be careful assuming system-reserved resources will be present. Last I checked, AWS EKS does not reserve system resources for the kubelet by default, so pods can starve system daemons of resources (e.g., https://github.com/awslabs/amazon-eks-ami/issues/79). This is of course more important for memory, but it can impact CPU as well.
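One way to guard against this is to reserve headroom explicitly in the kubelet configuration. A minimal sketch of the relevant `KubeletConfiguration` fields; the values below are illustrative, not AWS defaults, and should be sized to your instance type:

```yaml
# KubeletConfiguration snippet: reserve resources for system daemons and the
# kubelet itself so pods cannot consume everything on the node.
# NOTE: the quantities here are illustrative placeholders.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 100m
  memory: 256Mi
kubeReserved:
  cpu: 100m
  memory: 512Mi
evictionHard:
  memory.available: 100Mi
```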
-
Compile Linux Kernel 6.x on AL2? 😎
For example, this is available for AL2: https://github.com/awslabs/amazon-eks-ami
-
Hands-on lab for studying the EKS, which scenarios I should learn?
I found this document that lists the pod limits per node size. I suspect you will want to consider larger worker nodes or you will very quickly be unable to schedule additional workloads.
-
k3s on AWS, does it make sense?
source
- EKS Worker Nodes on RHEL 8?
-
Five Rookie Mistakes with Kubernetes on AWS. Which were yours?
Issue 1 is a known issue caused by the memory reservation being too low; see e.g. https://github.com/awslabs/amazon-eks-ami/issues/1145
-
EKS: Shouldn't the node autoscaling group take the pod limit into consideration?
No, a new node is added only when there are not enough resources to start a new pod. So if you have many pods with small resource requests, you can hit the pods-per-node limit; on EKS the maximum number of pods depends on the instance type - https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt You can increase that limit: https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
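The linked AWS doc raises the limit via VPC CNI prefix delegation, where each secondary-IP slot hands out a /28 prefix (16 addresses) instead of a single IP. A rough sketch of the calculation AWS's max-pods-calculator script performs; the ENI/IP figures and the 110/250 caps are assumptions taken from AWS documentation:

```python
# With prefix delegation, each secondary-IP slot yields a /28 prefix (16 IPv4
# addresses). AWS recommends capping max pods at 110 for instances with fewer
# than 30 vCPUs and at 250 otherwise, regardless of raw IP capacity.
def max_pods_prefix_delegation(enis: int, ipv4_per_eni: int, vcpus: int) -> int:
    ip_capacity = enis * (ipv4_per_eni - 1) * 16 + 2
    cap = 110 if vcpus < 30 else 250
    return min(ip_capacity, cap)

# ENI/IP/vCPU figures below are from EC2 documentation for these types:
print(max_pods_prefix_delegation(3, 10, 2))   # m5.large    -> 110 (capped)
print(max_pods_prefix_delegation(8, 30, 48))  # m5.12xlarge -> 250 (capped)
```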
-
Blog: KWOK: Kubernetes WithOut Kubelet
# of pods are essentially capped by the worker node choice.
below excerpt from: https://github.com/awslabs/amazon-eks-ami/blob/master/files/...
# Mapping is calculated from AWS EC2 API using the following formula:
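The formula the eni-max-pods.txt header refers to derives each value from the instance type's ENI count and per-ENI IPv4 limit. A quick sketch; the ENI/IP counts in the examples are assumptions taken from EC2 documentation for those instance types:

```python
# Default EKS max-pods formula (VPC CNI, no prefix delegation):
# each pod gets a secondary IPv4 address, the primary IP of each ENI is not
# assigned to pods (hence the -1), and the +2 accounts for host-network pods
# such as aws-node and kube-proxy, which consume no pod IPs.
def max_pods(enis: int, ipv4_per_eni: int) -> int:
    return enis * (ipv4_per_eni - 1) + 2

# ENI/IP limits per EC2 documentation:
print(max_pods(3, 10))  # m5.large -> 29
print(max_pods(2, 2))   # t3.micro -> 4
```

These results match the published eni-max-pods.txt values for those instance types.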
-
Tips on working with EKS
See also: EKS nodes lose readiness when containers exhaust memory
-
Best managed kubernetes platform
So it manifests itself in this way: your pod is scheduled but remains pending forever. You check the events and see that it's complaining about not being able to get an IP address. Ultimately, if you check here, you see there is a maximum number of pods that can be scheduled on any underlying EC2 instance, even if you have remaining IPs in your subnet. I found this to be one of the most poorly understood phenomena in EKS. Even those who claimed to "crack" it and wrote fancy blog posts about it fundamentally got it wrong. AFAIK this document reflects the official AWS guidance on how to mitigate it.
distroless
-
Chainguard Images now available on Docker Hub
Lots of questions here regarding what this product is. I can provide some context, from the perspective of an outside contributor.
Chainguard Images is a set of hardened container images.
They were built by the original team that brought you Google's Distroless (https://github.com/GoogleContainerTools/distroless)
However, there were a few problems with Distroless:
1. Distroless was based on Debian, which in turn limited CVE fixes to Debian's release cadence.
2. Distroless is built with Bazel, which is not exactly easy to contribute to, customize, etc...
3. Distroless images are hard to extend.
Chainguard built a new "undistro" OS for container workloads, named Wolfi, using their OSS projects melange (for building packages) and apko (for building images).
The idea (from my understanding) is that
1. You don't have to rely on upstream to cut a release. Chainguard will be doing that, with lots of automation & guardrails in place. This allows them to fix vulnerabilities extremely fast.
- Language focused Docker images, minus the operating system
-
Using Alpine can make Python Docker builds 50× slower
> If you have one image based on Ubuntu in your stack, you may as well base them all on Ubuntu, because you only need to download (and store!) the common base image once
This is only true if your infrastructure is static. If your infrastructure is highly elastic, image size has an impact on your time to scale up.
Of course, there are better choices than Alpine to optimize image size. Distroless (https://github.com/GoogleContainerTools/distroless) is a good example.
- Smaller and Safer Clojure Containers: Minimizing the Software Bill of Materials
-
Long Term Ownership of an Event-Driven System
Just like our code dependencies, container image updates can include security patches, bug fixes, and improvements. However, they can also include breaking changes, and it is crucial you test them thoroughly before putting them into production. Wherever possible, I recommend using a distroless base image, which will drastically reduce your image size and your attack surface, and therefore your maintenance burden going forward.
-
Minimizing Nuxt 3 Docker Images
```dockerfile
# Use a large Node.js base image to build the application and name it "build"
FROM node:18-alpine as build
WORKDIR /app
# Copy the package.json and package-lock.json files into the working directory before copying the rest of the files
# This will cache the dependencies and speed up subsequent builds if the dependencies don't change
COPY package*.json /app
# You might want to use yarn or pnpm instead
RUN npm install
COPY . /app
RUN npm run build

# Instead of using a node:18-alpine image, we are using a distroless image. These are provided by Google: https://github.com/GoogleContainerTools/distroless
FROM gcr.io/distroless/nodejs:18 as prod
WORKDIR /app
# Copy the built application from the "build" image into the "prod" image
COPY --from=build /app/.output /app/.output
# Since this image only contains Node.js, we do not need to specify the node command and simply pass the path to the index.mjs file!
CMD ["/app/.output/server/index.mjs"]
```
-
Build Your Own Docker with Linux Namespaces, Cgroups, and Chroot
There are lots of images without the entire OS, as other comments mention; an example would be Google's distroless[0]
[0]: https://github.com/GoogleContainerTools/distroless
-
Reddit temporarily ban subreddit and user advertising rival self-hosted platform (Lemmy)
Docker doesn't do this all the time. Distroless Docker containers are relatively common. https://github.com/GoogleContainerTools/distroless
-
Why elixir over Golang
Deployment: https://github.com/GoogleContainerTools/distroless
-
Reviews
Or use distroless image as it includes one, among others. https://github.com/GoogleContainerTools/distroless/blob/main/base/README.md
What are some alternatives?
calico - Cloud native networking and network security
iron-alpine - Hardened alpine linux baseimage for Docker.
amazon-eks-pod-identity-webhook - Amazon EKS Pod Identity Webhook
spring-boot-jib - This project is about Containerizing a Spring Boot Application With Jib
amazon-vpc-cni-k8s - Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS
jib - 🏗 Build container images for your Java applications.
prometheus - The Prometheus monitoring system and time series database.
podman - Podman: A tool for managing OCI containers and pods.
envoy - Cloud-native high-performance edge/middle/service proxy
dockerfiles - Various Dockerfiles I use on the desktop and on servers.
skopeo - Work with remote image registries - retrieving information, images, signing content
docker-alpine - Official Alpine Linux Docker image. Win at minimalism!