| | enhancements | kubernetes-json-schema |
|---|---|---|
| Mentions | 63 | 4 |
| Stars | 3,457 | 304 |
| Growth | 0.7% | 0.0% |
| Activity | 9.8 | 0.0 |
| Latest commit | 8 days ago | over 1 year ago |
| Language | Go | |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
enhancements
-
A skeptic's first contact with Kubernetes
The motivation is more the latter, but it's not at all clear the proposed removal of the embedded kustomize will proceed, given the compatibility implications. See discussion at https://github.com/kubernetes/enhancements/issues/4706#issue... and following.
-
Debugging Distroless Images with kubectl and cdebug
(I do see there are some proposed enhancements related to profiles that might help here)
-
Design Docs at Google
Thanks for these links!
I picked out one at random just to check if my skeptical reaction is fair: https://github.com/kubernetes/enhancements/tree/master/keps/...
- OK, this is actually a really good and useful doc!
- However, it's not an up-front design doc, it has clearly been written after the bulk of the work has been done, to explain and justify rolling out a big change. (See the "implementation history" timeline: https://github.com/kubernetes/enhancements/tree/master/keps/...)
- It looks like the template wasn't very useful; most of the required sections are marked "N/A", and there are comments like "The best test for work like this is, more or less, 'did it work?'"
-
IBM to buy HashiCorp in $6.4B deal
> was always told early on that although they supported vault on kubernetes via a helm chart, they did not recommend using it on anything but EC2 instances (because of "security", though their reasoning never really made sense).
The reasoning is basically that there are some security and isolation guarantees you don't get in Kubernetes that you do get on bare metal or (to a somewhat lesser extent) in VMs.
In particular for Kubernetes, Vault wants to run as a non-root user and set the IPC_LOCK capability when it starts to prevent its memory from being swapped to disk. While in Docker you can directly enable this by adding capabilities when you launch the container, Kubernetes has an issue because of the way it handles non-root container users specified in a pod manifest, detailed in a (long-dormant) KEP: https://github.com/kubernetes/enhancements/blob/master/keps/... (tl;dr: Kubernetes runs the container process as root, with the specified capabilities added, but then switches it to the non-root UID, which causes the explicitly-added capabilities to be dropped).
You can work around this by rebuilding the container and setting the capability directly on the binary, but neither the upstream build of the binary nor the one in the container image comes with that set (the user is expected to add the capability at runtime when running the container image directly, and the packaged systemd unit grants it when running as a systemd service, so there's no need to bake it in except to work around Kubernetes' ambient-capability issue).
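For concreteness, here is a minimal sketch (not from the thread) of the securityContext combination that triggers the dropped-capability behavior described above, written against the k8s.io/api/core/v1 Go types; the image tag and UID are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(100) // placeholder non-root UID, e.g. the vault user

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "vault-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "vault",
				Image: "hashicorp/vault:latest", // placeholder tag
				SecurityContext: &corev1.SecurityContext{
					// Run the container process as a non-root UID...
					RunAsUser: &nonRoot,
					// ...and request IPC_LOCK so Vault can mlock its memory.
					Capabilities: &corev1.Capabilities{
						Add: []corev1.Capability{"IPC_LOCK"},
					},
				},
			}},
		},
	}

	// Per the KEP referenced above, the runtime adds IPC_LOCK while the process
	// is still root, then switches to UID 100, at which point the explicitly
	// added (non-file, non-ambient) capability is dropped -- so mlock fails
	// unless the capability is set on the binary itself (setcap in the image).
	fmt.Printf("%+v\n", pod.Spec.Containers[0].SecurityContext)
}
```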
> It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."
-
Exploring cgroups v2 and MemoryQoS With EKS and Bottlerocket
A value of 0 is not the request we've defined, and that makes sense: Memory QoS has been in alpha since Kubernetes 1.22 (August 2021) and, according to the KEP, it was still in alpha as of 1.27.
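A quick way to check this yourself is to read the cgroup v2 file from inside the container; this small sketch (my own, not from the article) assumes the container's own cgroup is visible at /sys/fs/cgroup.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// With Memory QoS enabled, the kubelet writes the container's memory
	// request into memory.min on cgroup v2.
	data, err := os.ReadFile("/sys/fs/cgroup/memory.min")
	if err != nil {
		panic(err)
	}
	fmt.Println("memory.min =", strings.TrimSpace(string(data)))
	// A value of 0 means the request was not translated, i.e. the alpha
	// MemoryQoS feature gate is not in effect on this node.
}
```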
-
Jenkins Agents On Kubernetes
Note: There's actually a Structured Authentication Config established via KEP-3331. It's in v1.28 as a feature-flag-gated option and removes the limitation of only having one OIDC provider. I may look into doing an article on it, but for now I'll deal with the issue in a manner that should work even with somewhat older versions of Kubernetes.
-
Isn't the release cycle becoming a bit crazy with monthly releases and deprecations?
Kubernetes supports a skew policy of n+2 between the API server and the kubelet. This means that if your control plane (CP) and data plane (DP) are both on 1.20, you could upgrade your control plane twice (1.20 -> 1.21 -> 1.22) before you need to upgrade your data plane. And when it comes time to upgrade your data plane, you can jump from 1.20 to 1.22 to minimize update churn. In the future, this skew will be opened up to n+3: https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3935-oldest-node-newest-control-plane
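As a back-of-the-envelope illustration of that skew arithmetic (my own sketch, not part of any Kubernetes API), with the allowed skew at 2 today and 3 once KEP-3935 lands:

```go
package main

import "fmt"

// withinSkew reports whether a kubelet at kubeletMinor is still supported by a
// control plane at controlPlaneMinor, given the allowed version skew.
func withinSkew(controlPlaneMinor, kubeletMinor, maxSkew int) bool {
	// The kubelet may never be newer than the API server, and may lag it by
	// at most maxSkew minor versions.
	return kubeletMinor <= controlPlaneMinor &&
		controlPlaneMinor-kubeletMinor <= maxSkew
}

func main() {
	// Control plane upgraded twice while the data plane stays on 1.20.
	fmt.Println(withinSkew(22, 20, 2)) // true: 1.22 API server, 1.20 kubelet
	fmt.Println(withinSkew(23, 20, 2)) // false: a third upgrade would break the policy
	fmt.Println(withinSkew(23, 20, 3)) // true once the skew widens per KEP-3935
}
```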
-
Kubernetes SidecarContainers feature is merged
The KEP (Kubernetes Enhancement Proposal) is linked to in the PR [1]. From the summary:
> Sidecar containers are a new type of containers that start among the Init containers, run through the lifecycle of the Pod and don’t block pod termination. Kubelet makes a best effort to keep them alive and running while other containers are running.
[1] https://github.com/kubernetes/enhancements/tree/master/keps/...
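For concreteness, here is a minimal sketch of how the feature surfaces in a pod spec as it eventually shipped: a sidecar is declared as an init container with restartPolicy: Always. It uses the k8s.io/api/core/v1 Go types; container names and images below are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	always := corev1.ContainerRestartPolicyAlways

	spec := corev1.PodSpec{
		InitContainers: []corev1.Container{{
			Name:  "log-shipper",                 // placeholder sidecar
			Image: "example.com/log-shipper:1.0", // placeholder image
			// restartPolicy: Always is what marks an init container as a
			// sidecar: it starts in init order, keeps running alongside the
			// main containers, and does not block pod termination.
			RestartPolicy: &always,
		}},
		Containers: []corev1.Container{{
			Name:  "app",
			Image: "example.com/app:1.0", // placeholder image
		}},
	}

	fmt.Println(*spec.InitContainers[0].RestartPolicy)
}
```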
-
What's there in K8s 1.27
This is where the new feature of mutable scheduling directives for jobs comes into play. This feature enables the updating of a job's scheduling directives before it begins. Essentially, it allows custom queue controllers to influence pod placement without needing to directly handle the assignment of pods to nodes themselves. To learn more about this check out the Kubernetes Enhancement Proposal 2926.
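As a rough sketch of how a custom queue controller might use this (my own example, assuming client-go and placeholder names), it would rewrite the suspended Job's scheduling directives before releasing it:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// A queue controller would fetch a suspended Job ("my-job" is a placeholder)...
	job, err := client.BatchV1().Jobs("default").Get(ctx, "my-job", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// ...and, while spec.suspend is still true and no pods have been created,
	// KEP-2926 allows it to rewrite the pod template's scheduling directives,
	// e.g. pin the job to a particular node pool.
	job.Spec.Template.Spec.NodeSelector = map[string]string{
		"pool": "batch", // placeholder label
	}
	if _, err := client.BatchV1().Jobs("default").Update(ctx, job, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scheduling directives updated")
	// The controller would then flip spec.suspend to false in a follow-up
	// update to actually release the job for scheduling.
}
```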
-
Dependencies between Services
What you're asking for is a (vanilla) Kubernetes non-goal; others have mentioned fluxcd and other add-ons that provide primitives for dependency-aware deployments. The problem space is so large that it's unreasonable to address these concerns in Kubernetes itself; instead, make it extensible... Look at this KEP for example: https://github.com/kubernetes/enhancements/issues/753 Sidecar containers have existed, and been named as such, since WAY before that KEP's inception; defining what these things should and shouldn't do is largely arbitrary. Aka: your use-case is niche; if you don't like the behavior, use flux or argo, or write something yourself.
kubernetes-json-schema
-
WebAssembly: Docker Without Containers
Hey, so I thought I remembered your username. This isn’t the first interaction we’ve had, or I’ve seen you have, that follows this similar pattern. In fact it’s the third example from you under this post!
It’s not a particularly pleasant experience to discuss anything with you, as after you make a particularly vapid and usually ice-cold take that is rebuffed, you seem to just try to make snarky replies rather than engage.
Understand that if you post your takes here they may be discussed and challenged, and if you don’t want this then I would refrain from initially commenting.
In response to your comment: They do. All Kubernetes resources are typed with JSON-schema definitions. Because of course they are; how else would Kubernetes validate anything? https://kubernetesjsonschema.dev/
Anyone who's used k8s at all knows this, if only from the error messages. From this you get autocompletion and a wide ecosystem of GUI configuration tools. I like Lens (https://k8slens.dev/).
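One way to consume those schemas programmatically is shown in this small sketch (my own, not from the comment), using github.com/xeipuuv/gojsonschema, the library kubeval itself builds on, plus sigs.k8s.io/yaml for the YAML-to-JSON step; the schema URL version path is just an example.

```go
package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema"
	"sigs.k8s.io/yaml"
)

const manifest = `
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
      - name: demo
        image: nginx
`

func main() {
	// Convert the YAML manifest to JSON so it can be checked against the schema.
	doc, err := yaml.YAMLToJSON([]byte(manifest))
	if err != nil {
		panic(err)
	}

	// Schema URL follows the kubernetesjsonschema.dev layout; the version in
	// the path is an example, not a recommendation.
	schema := gojsonschema.NewReferenceLoader(
		"https://kubernetesjsonschema.dev/v1.18.0-standalone-strict/deployment-apps-v1.json")

	result, err := gojsonschema.Validate(schema, gojsonschema.NewBytesLoader(doc))
	if err != nil {
		panic(err)
	}
	if result.Valid() {
		fmt.Println("manifest matches the schema")
		return
	}
	for _, e := range result.Errors() {
		fmt.Println("-", e)
	}
}
```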
-
Data and System Visualization Tools That Will Boost Your Productivity
To avoid spending an unreasonable amount of time trying to find that one wrong indent, I recommend you use schema validation and let your IDE do all the work. You can use validation schemas from https://schemastore.org/json or custom schemas such as these for Kubernetes to validate your files. These will work both with JetBrains products (e.g. PyCharm, IntelliJ) and with VSCode (see this guide).
-
Test manifest compatibility against version
Seems like they haven't generated v1.20+ schemas. It might work if you generate the schemas yourself and feed them to KUBEVAL_SCHEMA_LOCATION.
-
A Deep Dive Into Kubernetes Schema Validation
Kubeval - instrumenta/kubernetes-json-schema (last commit: 133f848 on April 29, 2020)
What are some alternatives?
kubeconform - A FAST Kubernetes manifests validator, with support for Custom Resources!
spark-operator - Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
kubeval - Validate your Kubernetes configuration files, supports multiple Kubernetes versions
klipper-lb - Embedded service load balancer in Klipper
lens-resource-map-extension - Lens - The Kubernetes IDE extension that displays Kubernetes resources and their relations as a force graph.
pixie - Instant Kubernetes-Native Application Observability
kubernetes-schema-validation - resources for the blog post about Kubernetes schema validation
connaisseur - An admission controller that integrates Container Image Signature Verification into a Kubernetes cluster
kubernetes-json-schema - JSON Schemas for every version of every object in every version of Kubernetes
conftest - Write tests against structured configuration data using the Open Policy Agent Rego query language
mermaid - Generation of diagrams like flowcharts or sequence diagrams from text in a similar manner as markdown