k3s vs enhancements
| | k3s | enhancements |
|---|---|---|
| Mentions | 7 | 58 |
| Stars | 15,937 | 3,270 |
| Growth | - | 1.1% |
| Activity | 9.2 | 9.7 |
| Latest commit | about 3 years ago | 4 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k3s
-
Kubernetes: Multi-cluster communication with Flomesh Service Mesh (Part 2)
In this demo, we will be using k3d, a lightweight wrapper to run k3s (Rancher Labs' minimal Kubernetes distribution) in Docker, to create 4 separate clusters named control-plane, cluster-1, cluster-2, and cluster-3.
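A minimal sketch of that setup, assuming the k3d CLI is already installed; it simply shells out to `k3d cluster create` for each of the four cluster names mentioned in the excerpt:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Cluster names taken from the demo above; "control-plane" here is just
	// a cluster name, not a node role.
	clusters := []string{"control-plane", "cluster-1", "cluster-2", "cluster-3"}

	for _, name := range clusters {
		// Equivalent to running `k3d cluster create <name>` in a shell.
		cmd := exec.Command("k3d", "cluster", "create", name)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "failed to create cluster %s: %v\n", name, err)
			os.Exit(1)
		}
	}
}
```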
-
Pipy: Protecting Kubernetes Apps from SQL Injection & XSS Attacks
To run the demo locally, we recommend k3d, a lightweight wrapper to run k3s (Rancher Labs' minimal Kubernetes distribution) in Docker.
-
When a node goes down, how long should k8s wait before migrating pods to other nodes?
I've been messing around with k8s (k3s) lately, and ran into the "issue" of downtime/inconsistencies caused when one of multiple workers goes down while it has pods running on it. I found a couple of useful parameters here that helped me reduce the time needed to redeploy the old pods on other nodes, as well as stop sending requests to the NotReady node. But that got me thinking: how long should k8s wait before doing these things? Or is there perhaps a better option for increasing availability?
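One of the knobs this usually comes down to is the per-pod toleration for the not-ready/unreachable node taints: by default pods tolerate them for 300 seconds before being evicted, and lowering tolerationSeconds makes rescheduling kick in sooner. A hedged sketch using the k8s.io/api Go types (the 30-second value is purely illustrative, not a recommendation):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Evict pods from NotReady/unreachable nodes after 30s instead of the
	// default 300s. Shorter values mean faster failover but more churn
	// during brief network blips.
	seconds := int64(30)

	podSpec := corev1.PodSpec{
		Tolerations: []corev1.Toleration{
			{
				Key:               "node.kubernetes.io/not-ready",
				Operator:          corev1.TolerationOpExists,
				Effect:            corev1.TaintEffectNoExecute,
				TolerationSeconds: &seconds,
			},
			{
				Key:               "node.kubernetes.io/unreachable",
				Operator:          corev1.TolerationOpExists,
				Effect:            corev1.TaintEffectNoExecute,
				TolerationSeconds: &seconds,
			},
		},
	}

	// These tolerations would go into a Deployment's pod template.
	fmt.Printf("%+v\n", podSpec.Tolerations)
}
```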
-
Kubernetes Development Environments – A Comparison
Local Kubernetes clusters are clusters that are running on the individual computer of the developer. There are many tools that provide such an environment, such as Minikube, microk8s, k3s, or kind. While they are not all the same, their use as a development environment is quite comparable.
-
Local Cluster vs. Remote Cluster for Kubernetes-Based Development
Since the developer is the only one who has to access this cluster for development, local clusters can be a feasible solution for this purpose. Over time, several solutions have emerged that are particularly made for running Kubernetes in local environments. The most important ones are Kubernetes in Docker (kind), MicroK8s, minikube and k3s. For a comparison of these local Kubernetes options, you can look at this post.
-
Kubernetes: Virtual Clusters As Development Environments
With local Kubernetes environments such as minikube or k3s, developers can create their own Kubernetes clusters on their local computers. This often leads to developers struggling with the management and setup of these pared-down Kubernetes technologies that are also not completely realistic compared to “real-world”, cloud-based environments. The upside of this approach is that the developers have full control over their environment and can independently create it whenever they need it.
-
[Recap] The API Hangout #31
K3d - a lightweight wrapper to run k3s in Docker.
enhancements
-
IBM to buy HashiCorp in $6.4B deal
> was always told early on that although they supported Vault on Kubernetes via a Helm chart, they did not recommend using it on anything but EC2 instances (because of "security", though their reasoning never really made sense).
The reasoning is basically that there are some security and isolation guarantees you don't get in Kubernetes that you do get on bare metal or (to a somewhat lesser extent) in VMs.
In particular for Kubernetes, Vault wants to run as a non-root user and set the IPC_LOCK capability when it starts to prevent its memory from being swapped to disk. While in Docker you can directly enable this by adding capabilities when you launch the container, Kubernetes has an issue because of the way it handles non-root container users specified in a pod manifest, detailed in a (long-dormant) KEP: https://github.com/kubernetes/enhancements/blob/master/keps/... (tl;dr: Kubernetes runs the container process as root, with the specified capabilities added, but then switches it to the non-root UID, which causes the explicitly-added capabilities to be dropped).
You can work around this by rebuilding the container and setting the capability directly on the binary, but neither the upstream build of the binary nor the one in the container image comes with that set (because the user should set it at runtime if running the container image directly, and the packaged systemd unit sets it when Vault runs as a systemd service, so there's no need to bake it in except to work around Kubernetes' ambient-capability issue).
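For reference, this is roughly the combination the comment above is describing: a pod that both runs as a non-root user and asks for IPC_LOCK in its securityContext. A sketch using the k8s.io/api Go types (the image reference and UID are placeholders):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	runAsNonRoot := true
	uid := int64(100) // placeholder non-root UID

	container := corev1.Container{
		Name:  "vault",
		Image: "hashicorp/vault", // illustrative image reference
		SecurityContext: &corev1.SecurityContext{
			RunAsNonRoot: &runAsNonRoot,
			RunAsUser:    &uid,
			Capabilities: &corev1.Capabilities{
				// Requested so Vault can mlock() its memory and keep it out
				// of swap. Per the KEP discussion above, the capability is
				// added while the container process is still root and is
				// then dropped when the runtime switches to the non-root
				// UID, so the non-root process never actually has it.
				Add: []corev1.Capability{"IPC_LOCK"},
			},
		},
	}
	fmt.Printf("%+v\n", container.SecurityContext)
}
```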
> It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."
-
Exploring cgroups v2 and MemoryQoS With EKS and Bottlerocket
0 is not the request we've defined, and that makes sense. Memory QoS has been in alpha since Kubernetes 1.22 (August 2021) and, according to the KEP, it was still in alpha as of 1.27.
-
Jenkins Agents On Kubernetes
Note: There's actually a Structured Authentication Config established via KEP-3331. It's in v1.28 as an option gated behind a feature flag and removes the limitation of only having one OIDC provider. I may look into doing an article on it, but for now I'll deal with the issue in a manner that should work even with somewhat older versions of Kubernetes.
-
Isn't the release cycle becoming a bit crazy with monthly releases and deprecations?
Kubernetes supports a skew policy of n+2 between API server and kubelet. This means if your CP and DP are both on 1.20, you could upgrade your control plane twice (1.20 -> 1.21 -> 1.22) before you need to upgrade your data plane. And when it comes time to upgrade your data plane you can jump from 1.20 to 1.22 to minimize update churn. In the future, this skew will be opened to n+3 https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3935-oldest-node-newest-control-plane
-
Kubernetes SidecarContainers feature is merged
The KEP (Kubernetes Enhancement Proposal) is linked to in the PR [1]. From the summary:
> Sidecar containers are a new type of containers that start among the Init containers, run through the lifecycle of the Pod and don’t block pod termination. Kubelet makes a best effort to keep them alive and running while other containers are running.
[1] https://github.com/kubernetes/enhancements/tree/master/keps/...
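For context, the form this took in the API is an init container whose restartPolicy is Always: it starts in init order, keeps running alongside the regular containers, and doesn't block pod termination. A sketch with the k8s.io/api Go types (names and images are placeholders, and the field requires a release with the SidecarContainers feature enabled):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	always := corev1.ContainerRestartPolicyAlways

	podSpec := corev1.PodSpec{
		// Declared among the init containers, but kept running for the whole
		// lifetime of the pod rather than run once to completion.
		InitContainers: []corev1.Container{
			{
				Name:          "log-shipper", // hypothetical sidecar
				Image:         "example/log-shipper:latest",
				RestartPolicy: &always,
			},
		},
		Containers: []corev1.Container{
			{
				Name:  "app", // hypothetical main container
				Image: "example/app:latest",
			},
		},
	}
	fmt.Printf("sidecar init containers: %d\n", len(podSpec.InitContainers))
}
```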
-
What's there in K8s 1.27
This is where the new feature of mutable scheduling directives for jobs comes into play. It enables updating a job's scheduling directives before the job starts, so custom queue controllers can influence pod placement without having to handle the assignment of pods to nodes themselves. To learn more, check out Kubernetes Enhancement Proposal 2926.
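A rough sketch of how a queue controller might use this, with the k8s.io/api Go types: the Job is created suspended, its node selector is filled in later, and only then is it resumed (label keys/values and images are hypothetical):

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	suspended := true

	// A queue controller would typically create the Job in a suspended state.
	job := batchv1.Job{
		Spec: batchv1.JobSpec{
			Suspend: &suspended,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "worker", Image: "example/worker:latest"}, // placeholder
					},
					RestartPolicy: corev1.RestartPolicyNever,
				},
			},
		},
	}

	// While the Job is still suspended and has never started, scheduling
	// directives in the pod template (node selector, affinity, tolerations,
	// ...) may be updated to steer placement.
	job.Spec.Template.Spec.NodeSelector = map[string]string{
		"pool": "spot", // hypothetical node label
	}

	// Resume the Job once placement has been decided.
	resume := false
	job.Spec.Suspend = &resume

	fmt.Printf("nodeSelector=%v suspend=%v\n",
		job.Spec.Template.Spec.NodeSelector, *job.Spec.Suspend)
}
```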
-
Dependencies between Services
What you're asking for is a (vanilla) Kubernetes non-goal; others have mentioned fluxcd and other add-ons that provide primitives for dependency-aware deployments. The problem space is so large that it's unreasonable to address these concerns in Kubernetes itself; instead, make it extensible... Look at this KEP for example: https://github.com/kubernetes/enhancements/issues/753 Sidecar containers have existed, and been named as such, since WAY before that KEP's inception; defining what these things should and shouldn't do is largely arbitrary. Aka: your use case is niche; if you don't like the behavior, use flux or argo, or write something yourself.
- When you learn the Sidecar Container KEP got dropped from the Kubernetes release. Again.
-
Kubernetes 1.27 will be out next week! - Learn what's new and what's deprecated - Group volume snapshots - Pod resource updates - kubectl subcommands … And more!
If you're further interested, I recommend checking out the KEP. I love how they document the decision making and all these edge cases :).
-
How can I force assign an IP to my Load Balancer ingress in “status.loadBalancer”?
See https://kubernetes.io/docs/reference/kubectl/conventions/#subresources and https://github.com/kubernetes/enhancements/issues/2590
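If the goal is to set that field by hand, it lives in the status subresource, so a normal spec update won't touch it. A hedged client-go sketch (namespace, Service name, and IP are placeholders, and a cloud or service load-balancer controller may later overwrite whatever you write here):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	svc, err := clientset.CoreV1().Services("default").Get(ctx, "my-lb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Write the ingress IP into status.loadBalancer, the same field kubectl
	// can reach with --subresource=status.
	svc.Status.LoadBalancer.Ingress = []corev1.LoadBalancerIngress{
		{IP: "203.0.113.10"}, // documentation-range IP, purely illustrative
	}
	if _, err := clientset.CoreV1().Services("default").UpdateStatus(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("status.loadBalancer updated")
}
```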
What are some alternatives?
minikube - Run Kubernetes locally
kubeconform - A FAST Kubernetes manifests validator, with support for Custom Resources!
devspace-plugin-loft - Loft Plugin for DevSpace - adds commands like `devspace create space` or `devspace create vcluster` to DevSpace
spark-operator - Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
cilium - eBPF-based Networking, Security, and Observability
kubernetes-json-schema - Schemas for every version of every object in every version of Kubernetes
multi-tenancy - A working place for multi-tenancy related proposals and prototypes.
klipper-lb - Embedded service load balancer in Klipper
kubefwd - Bulk port forwarding Kubernetes services for local development.
Hey - HTTP load generator, ApacheBench (ab) replacement
fsm - Lightweight service mesh for Kubernetes east-west and north-south traffic management; uses eBPF for layer 4 and the Pipy proxy for layer 7 traffic management, and supports multi-cluster networking.
connaisseur - An admission controller that integrates Container Image Signature Verification into a Kubernetes cluster