| | enhancements | metallb |
|---|---|---|
| Mentions | 60 | 78 |
| Stars | 3,276 | 6,639 |
| Growth | 1.3% | 1.1% |
| Activity | 9.7 | 9.4 |
| Last commit | 1 day ago | 9 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
enhancements
-
Design Docs at Google
Thanks for these links!
I picked out one at random just to check if my skeptical reaction is fair: https://github.com/kubernetes/enhancements/tree/master/keps/...
- OK, this is actually a really good and useful doc!
- However, it's not an up-front design doc, it has clearly been written after the bulk of the work has been done, to explain and justify rolling out a big change. (See the "implementation history" timeline: https://github.com/kubernetes/enhancements/tree/master/keps/...)
- It looks like the template wasn't very useful; most of the required sections are marked "N/A", and there are comments like "The best test for work like this is, more or less, 'did it work?'"
-
IBM to buy HashiCorp in $6.4B deal
> was always told early on that although they supported Vault on Kubernetes via a Helm chart, they did not recommend using it on anything but EC2 instances (because of "security", though their reasoning never really made sense).
The reasoning is basically that there are some security and isolation guarantees you don't get in Kubernetes that you do get on bare metal or (to a somewhat lesser extent) in VMs.
In particular for Kubernetes, Vault wants to run as a non-root user and set the IPC_LOCK capability when it starts to prevent its memory from being swapped to disk. While in Docker you can directly enable this by adding capabilities when you launch the container, Kubernetes has an issue because of the way it handles non-root container users specified in a pod manifest, detailed in a (long-dormant) KEP: https://github.com/kubernetes/enhancements/blob/master/keps/... (tl;dr: Kubernetes runs the container process as root, with the specified capabilities added, but then switches it to the non-root UID, which causes the explicitly-added capabilities to be dropped).
You can work around this by rebuilding the container and setting the capability directly on the binary. Neither the upstream build of the binary nor the one in the container image comes with that set, because the user is expected to set it at runtime when running the container image directly, and the systemd unit sets it when running as a systemd service; there's no need for a file capability except to work around Kubernetes' ambient-capability issue.
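For illustration, here's a minimal sketch of the kind of pod spec that trips over this (the image tag and UID are assumptions, not from the thread):

```yaml
# Hypothetical Vault pod spec illustrating the issue described above.
# The pod asks for a non-root user plus IPC_LOCK, but because the kubelet
# starts the container process as root with the capability added and then
# switches to the non-root UID, the explicitly added capability is dropped.
apiVersion: v1
kind: Pod
metadata:
  name: vault
spec:
  containers:
    - name: vault
      image: hashicorp/vault:1.15   # tag is illustrative
      securityContext:
        runAsNonRoot: true
        runAsUser: 100              # assumed UID of the vault user in the image
        capabilities:
          add: ["IPC_LOCK"]         # intended to let vault mlock() its memory
```

The rebuild workaround mentioned above amounts to giving the binary a file capability (e.g. `setcap cap_ipc_lock=+ep /bin/vault` in the image build), which survives the UID switch.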
> It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."
-
Exploring cgroups v2 and MemoryQoS With EKS and Bottlerocket
0 is not the request we've defined. And that makes sense: Memory QoS has been in alpha since Kubernetes 1.22 (August 2021), and according to the KEP it was still in alpha as of 1.27.
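For context, a hedged sketch of the kind of pod used to observe this (names and images are illustrative): with the kubelet's alpha MemoryQoS feature gate enabled on a cgroup v2 node, the memory request is supposed to be written to the container cgroup's memory.min; the "0" above is what you read back when that mapping isn't in effect.

```yaml
# Hypothetical pod for checking MemoryQoS behavior on a cgroup v2 node:
# with the alpha MemoryQoS feature gate on, the 128Mi request should land
# in the container cgroup's memory.min; without it, memory.min reads 0.
apiVersion: v1
kind: Pod
metadata:
  name: memory-qos-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          memory: "128Mi"
        limits:
          memory: "256Mi"
```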
-
Jenkins Agents On Kubernetes
Note: There's actually a Structured Authentication Config established via KEP-3331. It's in v1.28 as a feature-gated option and removes the limitation of only having one OIDC provider. I may look into doing an article on it, but for now I'll deal with the issue in a manner that should work even with somewhat older versions of Kubernetes.
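As a hedged sketch only (field names per KEP-3331; the exact apiVersion and feature gating depend on your minor version, and the issuer URLs and claims below are placeholders), the structured config is a file passed to the API server instead of the `--oidc-*` flags, and it can list multiple JWT issuers side by side:

```yaml
# Sketch of a structured authentication config per KEP-3331; issuers and
# claim names are placeholders, not working values.
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://issuer-one.example.com
      audiences:
        - my-cluster
    claimMappings:
      username:
        claim: sub
        prefix: "one:"
  - issuer:
      url: https://issuer-two.example.com
      audiences:
        - my-cluster
    claimMappings:
      username:
        claim: email
        prefix: "two:"
```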
-
Isn't the release cycle becoming a bit crazy with monthly releases and deprecations?
Kubernetes supports a version-skew policy of n+2 between the API server and the kubelet. This means that if your control plane and data plane are both on 1.20, you can upgrade your control plane twice (1.20 -> 1.21 -> 1.22) before you need to upgrade your data plane. And when it comes time to upgrade the data plane, you can jump straight from 1.20 to 1.22 to minimize update churn. In the future, this skew will be widened to n+3: https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3935-oldest-node-newest-control-plane
-
Kubernetes SidecarContainers feature is merged
The KEP (Kubernetes Enhancement Proposal) is linked to in the PR [1]. From the summary:
> Sidecar containers are a new type of containers that start among the Init containers, run through the lifecycle of the Pod and don’t block pod termination. Kubelet makes a best effort to keep them alive and running while other containers are running.
[1] https://github.com/kubernetes/enhancements/tree/master/keps/...
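In practice the merged feature is expressed as `restartPolicy: Always` on an init container (behind the SidecarContainers feature gate while not yet GA). A minimal sketch, with illustrative images and commands:

```yaml
# Sketch of the sidecar mechanism: an init container with
# restartPolicy: Always starts in init order, stays running for the life
# of the pod, and doesn't block pod termination.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  initContainers:
    - name: log-shipper              # the sidecar
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/app.log"]
      restartPolicy: Always          # this field is what makes it a sidecar
      volumeMounts:
        - name: logs
          mountPath: /logs
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 5; done"]
      volumeMounts:
        - name: logs
          mountPath: /logs
```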
-
What's there in K8s 1.27
This is where the new feature of mutable scheduling directives for jobs comes into play. It enables updating a job's scheduling directives before the job starts; essentially, it allows custom queue controllers to influence pod placement without directly handling the assignment of pods to nodes themselves. To learn more, check out Kubernetes Enhancement Proposal 2926.
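A hedged sketch of how a queue controller would use this (names are illustrative): create the Job suspended, patch scheduling fields in its pod template, then unsuspend it.

```yaml
# Job created suspended; while suspend is true, a queue controller may
# update scheduling directives in the pod template (e.g. nodeSelector,
# tolerations) and then flip suspend to false to release the pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: training-job
spec:
  suspend: true
  template:
    spec:
      nodeSelector: {}        # controller patches this before unsuspending
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo training step"]
```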
-
Dependencies between Services
What you're asking for is a (vanilla) Kubernetes non-goal; others have mentioned fluxcd and other add-ons that provide primitives for dependency-aware deployments. The problem space is so large that it's unreasonable to address these concerns in Kubernetes itself; instead, make it extensible... Look at this KEP for example: https://github.com/kubernetes/enhancements/issues/753 Sidecar containers have existed, and been named as such, since WAY before that KEP's inception; defining what these things should and shouldn't do is largely arbitrary. In other words: your use case is niche, and if you don't like the behavior, use flux or argo, or write something yourself.
- When you learn the Sidecar Container KEP got dropped from the Kubernetes release. Again.
-
Kubernetes 1.27 will be out next week! - Learn what's new and what's deprecated - Group volume snapshots - Pod resource updates - kubectl subcommands … And more!
If you're interested in more detail, I recommend checking out the KEP. I love how they document the decision making, and all these edge cases :).
metallb
-
Self hosted kubernetes
Hey guys, I want to share a guide I'm pretty proud of about setting up Kubernetes with https://kubespray.io/#/ and https://metallb.universe.tf/ so you can host it yourself. Most people spinning up Kubernetes either opt for k3s, get stuck in all the options, or can't set up external IPs for their services; these two tools eliminate those problems.
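For anyone following along, a minimal MetalLB layer-2 setup looks roughly like this (the address range is an assumption; pick a free range on your own LAN):

```yaml
# Minimal MetalLB L2 config: a pool of addresses MetalLB may hand out to
# LoadBalancer Services, plus an L2Advertisement so they are announced via ARP.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```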
- Deploy web app on port 80 using Kubernetes
-
How to load balance highly available bare metal Kubernetes cluster control plane nodes?
Have a closer look at MetalLB.
-
Trouble with RKE2 HA Setup: Part 2
To avoid that, you can use a combination of haproxy and keepalived, or an enterprise-grade load balancer like the ones from F5 or Citrix. Besides that, you can also work with https://kube-vip.io or https://metallb.universe.tf.
-
Kubernetes and feeling defeated
Not sure if klipper is usable in a cluster with multiple nodes, as it binds to one port only. You may want to use MetalLB instead: https://metallb.universe.tf/
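Once MetalLB is installed and has an address pool configured, an ordinary LoadBalancer Service picks up an external IP from the pool; a minimal sketch (names and ports are illustrative):

```yaml
# With MetalLB running, this Service gets an EXTERNAL-IP assigned
# automatically from the configured pool; no klipper-specific setup needed.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```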
-
Cool stuff to deploy for project ideas
Then deploy MetalLB https://metallb.universe.tf/
- Load balance ingress for bare metal
-
Own kubernetes cluster
What issue do you see with the load balancer? For self-hosted clusters, you can use MetalLB, for example, to get a single outward-facing IP that fails over to another node (keeping the same IP) if a node dies.
-
PaperLB: A Kubernetes Network Load Balancer Implementation
Quoting from their docs:
-
libvirt-k8s-provisioner - Ansible and Terraform to build a cluster from scratch in less than 10 minutes on KVM - Updated for 1.26
MetalLB to manage bare-metal LoadBalancer services - WIP - only L2 configuration can be set up via the playbook.
What are some alternatives?
kubeconform - A FAST Kubernetes manifests validator, with support for Custom Resources!
kube-vip - Kubernetes Control Plane Virtual IP and Load-Balancer
spark-operator - Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
calico - Cloud native networking and network security
kubernetes-json-schema - Schemas for every version of every object in every version of Kubernetes
ingress-nginx - Ingress-NGINX Controller for Kubernetes
klipper-lb - Embedded service load balancer in Klipper
external-dns - Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Hey - HTTP load generator, ApacheBench (ab) replacement
cert-manager - Automatically provision and manage TLS certificates in Kubernetes
connaisseur - An admission controller that integrates Container Image Signature Verification into a Kubernetes cluster
rancher - Complete container management platform