virtual-kubelet
cni
| | virtual-kubelet | cni |
|---|---|---|
| Mentions | 10 | 13 |
| Stars | 4,068 | 5,293 |
| Growth | 0.7% | 1.0% |
| Activity | 7.0 | 7.6 |
| Latest commit | 9 days ago | 9 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
virtual-kubelet
- Bare-Metal Kubernetes, Part I: Talos on Hetzner
Speaking of k8s, does anyone here know of ready-made solutions for getting Xcode (i.e. xcodebuild) running in pods? As far as I'm aware, there are no good solutions for getting Xcode running on Linux, so at the moment I'm just futzing about with a virtual-kubelet[0] implementation that spawns macOS VMs. This works just fine, but the problem seems like such an obvious one that I expect there to be some existing solution(s) I just missed.
- Nomad vs. Kubernetes
- Deploy on-prem Kubernetes
What is the best approach, paid or unpaid, to deploy a cluster on premises with burst to Azure/AWS? The only need is the ability to have some static pods. I do have a preference for free/open-source solutions.
I just stumbled upon this project a while back and don't have experience with it, so I don't know how well it works and what caveats you may face, but there's Virtual Kubelet, which aims to do just that, i.e. running a virtual Kubernetes node outside the cluster. Its Kip provider sounds like the thing you're looking for.
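To give a sense of what "running a virtual Kubernetes node outside the cluster" means in code: a provider such as Kip implements a small pod-lifecycle interface, and Virtual Kubelet handles the node registration and API-server plumbing. Below is a minimal, illustrative sketch against the PodLifecycleHandler interface from the virtual-kubelet node package, backed by nothing but an in-memory map; the exact signatures can differ between virtual-kubelet releases, and a real provider would start VMs or container groups instead.

```go
// A toy Virtual Kubelet provider: the "node" it backs has no kubelet at all,
// and pods scheduled onto it are handled by whatever this provider does.
// Real providers (ACI, Kip, a macOS-VM spawner, ...) would talk to their
// backend here; this one just keeps pods in a map. Sketch only: the
// PodLifecycleHandler interface lives in the virtual-kubelet node package
// and its exact shape can differ between releases.
package provider

import (
	"context"
	"fmt"
	"sync"

	corev1 "k8s.io/api/core/v1"
)

type InMemoryProvider struct {
	mu   sync.Mutex
	pods map[string]*corev1.Pod // keyed by namespace/name
}

func New() *InMemoryProvider {
	return &InMemoryProvider{pods: map[string]*corev1.Pod{}}
}

func key(namespace, name string) string { return namespace + "/" + name }

// CreatePod is where a real provider would start a VM or container group.
func (p *InMemoryProvider) CreatePod(ctx context.Context, pod *corev1.Pod) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	pod.Status.Phase = corev1.PodRunning
	p.pods[key(pod.Namespace, pod.Name)] = pod
	return nil
}

func (p *InMemoryProvider) UpdatePod(ctx context.Context, pod *corev1.Pod) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.pods[key(pod.Namespace, pod.Name)] = pod
	return nil
}

func (p *InMemoryProvider) DeletePod(ctx context.Context, pod *corev1.Pod) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.pods, key(pod.Namespace, pod.Name))
	return nil
}

func (p *InMemoryProvider) GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	pod, ok := p.pods[key(namespace, name)]
	if !ok {
		return nil, fmt.Errorf("pod %s/%s not found", namespace, name)
	}
	return pod, nil
}

func (p *InMemoryProvider) GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error) {
	pod, err := p.GetPod(ctx, namespace, name)
	if err != nil {
		return nil, err
	}
	return &pod.Status, nil
}

func (p *InMemoryProvider) GetPods(ctx context.Context) ([]*corev1.Pod, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	out := make([]*corev1.Pod, 0, len(p.pods))
	for _, pod := range p.pods {
		out = append(out, pod)
	}
	return out, nil
}
```

Virtual Kubelet takes an implementation like this, registers a virtual Node with the API server, and syncs scheduled pods through it, so from the control plane's point of view it looks like just another node.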
- How to use the GitOps model to create, update and manage applications at the edge with KubeEdge and Argo
Kubeedge docs are light on self-justification... How does https://github.com/kubeedge/kubeedge differ from https://github.com/virtual-kubelet/virtual-kubelet or just running a regular kubelet on that edge machine?
- Autoscaling Redis applications on Kubernetes 🚀🚀
If this sounds interesting, do check out Virtual Nodes in Azure Kubernetes Service to see how you can use them to seamlessly scale your applications to Azure Container Instances, benefit from quick provisioning of pods, and pay per second only for their execution time. The virtual nodes add-on for AKS is based on Virtual Kubelet, an open source Kubernetes kubelet implementation.
- Infrastructure Engineering - Diving Deep
Use cases like these are made possible by projects like KubeEdge, K3s and Virtual Kubelet. You can read more about how they power the edge with different architectures and compromises here.
- Evolving Container Security with Linux User Namespaces
This is a complicated question to answer.
This isn't my expertise (the cluster orchestration system), but I can answer to the best of my abilities: Titus today is a system that sits on top of Kubernetes and uses Kubernetes components to do its thing, but we've substituted many of the systems with our own. For example, closer to my area of knowledge, we've used our own executor / provider along with the Virtual Kubelet project (https://github.com/virtual-kubelet/virtual-kubelet) instead of the kubelet.
We're exploring where we can leverage the Kubernetes ecosystem, adapt components, or help contribute changes back that others can leverage to enable our use of more COTS components of Kubernetes.
tl;dr: We're swapping out the engines while in flight
cni
- Kubernetes Architecture
The CNI is language-agnostic and there are many different plugins available.
- Creating Kubernetes Cluster With CRI-O
Read more about the architecture of CRI-O here. The networking of the pod is set up through CNI, and CRI-O can be used with any CNI plugin.
- Kubernetes traffic discovery
In generic Kubernetes network policies, there is no action field. The Calico CNI plugin (a Kubernetes network plugin that implements the Container Network Interface) provides this functionality, and in particular provides logging even for allowed traffic. This worked when we tried it in our test clusters and in our own back end.
- Minikube now supports rootless podman driver for running Kubernetes
um, they aren't missing anything (but see below). they are k8s.
so if you want the genuine original mainline experience, you go to the project's GitHub repo; they have releases, and the detailed changelog has links to the binaries (https://github.com/kubernetes/kubernetes/blob/master/CHANGEL... .. the client is the kubectl binary, the server archive has the control plane components, and the node archive has the worker node stuff). you then have the option to set those up according to the documentation: generate TLS certs, specify the IP address range for pods (containers), install dependencies like etcd, and add a CNI-compatible container network layer provider -- if you have set up overlay networking, e.g. VXLAN or Geneve or something fancy with Open vSwitch's OVN, then the reference CNI plugin is probably sufficient.
at the end of this process you'll have the REST API (kube-apiserver) up and running and you can start submitting jobs (that will be persisted into etcd, eventually picked up by the scheduler control loop that calculates what should run where and persists it back to etcd, then a control loop on a particular worker will notice that something new is assigned to it, and it'll do the thing, allocate a pod, call CNI to allocate IP, etc.)
of course if you don't want to do all this by hand you can use a distribution that helps you with setup.
microk8s is a low-memory, low-I/O k8s distro by Canonical (the Ubuntu folks); they run dqlite (distributed SQLite) instead of etcd to lower I/O and memory requirements. many people don't like it because it uses snaps.
k3s was started by the Rancher folks (and is mostly still developed by them?).
there's k0s (for bare metal ... I have no idea what that means though), kind (Kubernetes in Docker), and there's also k3d (k3s in Docker).
these distributions work by consuming/wrapping the k8s components as Go libraries - https://github.com/kubernetes/kubernetes/blob/master/staging...
...
then there's the whole zoo of various k8s plugins/addons/tools for networking (CNI - https://github.com/containernetworking/cni#3rd-party-plugins), storage (CSI - https://kubernetes-csi.github.io/docs/drivers.html), helm for package management, a ton of security-related things that try to spot errors in all this circus ... and so on.
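To make the "call CNI to allocate IP" step mentioned above concrete, here is a rough sketch of how a container runtime drives a plugin chain through the libcni package from github.com/containernetworking/cni, using the reference bridge plugin with host-local IPAM. The paths, the config, and the runtime values are made-up illustrations, and libcni's exact signatures have shifted a little between releases:

```go
// Rough sketch of what a container runtime does when a pod sandbox needs a
// network: load a CNI config list, then invoke ADD on the plugin chain.
// Built on libcni from github.com/containernetworking/cni; API details vary
// a bit by release, so treat this as illustrative.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

// A config list like the ones dropped into /etc/cni/net.d/: the reference
// bridge plugin creates the veth/bridge, host-local hands out the pod IP.
const netConfJSON = `{
  "cniVersion": "0.4.0",
  "name": "podnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    }
  ]
}`

func main() {
	// Where the plugin binaries live (the reference plugins tarball is
	// usually unpacked into /opt/cni/bin, as in the install script below).
	cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	conf, err := libcni.ConfListFromBytes([]byte(netConfJSON))
	if err != nil {
		log.Fatal(err)
	}

	// Runtime-specific details: sandbox ID, its network namespace and the
	// interface name to create inside it (hypothetical example values).
	rt := &libcni.RuntimeConf{
		ContainerID: "example-sandbox",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
	}

	// ADD walks the plugin chain; the result carries the allocated IP(s).
	result, err := cni.AddNetworkList(context.Background(), conf, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", result)

	// On teardown the runtime calls DEL with the same config and runtime conf.
	if err := cni.DelNetworkList(context.Background(), conf, rt); err != nil {
		log.Fatal(err)
	}
}
```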
- How to install Weave's Ignite for Firecracker VMs with simple script
```bash
#!/usr/bin/bash
# Update apt-get repository and install dependencies
apt-get update && apt-get install -y --no-install-recommends dmsetup openssh-client git binutils
# Install containerd if it's not present -- prevents breaking docker-ce installations
which containerd || apt-get install -y --no-install-recommends containerd

# Installing CNI
# Current version from https://github.com/containernetworking/cni/releases
export CNI_VERSION=v1.0.1
ARCH=$([ "$(uname -m)" = "x86_64" ] && echo amd64 || echo arm64)
export ARCH
sudo mkdir -p /opt/cni/bin
curl -sSL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -xz -C /opt/cni/bin

# Installing Ignite
# Get the current version from https://github.com/weaveworks/ignite/releases
export VERSION=v0.10.0
GOARCH=$(go env GOARCH 2>/dev/null || echo "amd64")
export GOARCH
for binary in ignite ignited; do
  echo "Installing ${binary}..."
  curl -sfLo ${binary} "https://github.com/weaveworks/ignite/releases/download/${VERSION}/${binary}-${GOARCH}"
  chmod +x ${binary}
  sudo mv ${binary} /usr/local/bin
done

# Check if the installation was successful
ignite version
```
- Solving Four Kubernetes Networking Challenges
The Container Network Interface (CNI) includes a specification for writing network plugins to configure network interfaces. This allows you to create overlay networks that satisfy Pod-to-Pod communication requirements.
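On the plugin-author side, the spec boils down to handling ADD/CHECK/DEL commands that receive the JSON configuration on stdin plus a few environment variables, and printing a JSON result. The cni repo ships a pkg/skel helper that does the argument parsing; a bare-bones, do-nothing plugin sketch looks roughly like this (the PluginMain wiring shown matches cni v1.x and is an assumption for older or newer releases):

```go
// A do-nothing CNI plugin skeleton built on helpers shipped in the cni repo.
// The runtime invokes the binary with CNI_COMMAND=ADD/CHECK/DEL, passes the
// network config JSON on stdin, and expects a JSON result on stdout.
// A real plugin would create interfaces and allocate IPs in cmdAdd.
package main

import (
	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

func cmdAdd(args *skel.CmdArgs) error {
	// args.StdinData holds this plugin's JSON config; args.Netns, args.IfName
	// and args.ContainerID say where to plumb the network.
	result := &current.Result{CNIVersion: current.ImplementedSpecVersion}
	return types.PrintResult(result, result.CNIVersion)
}

func cmdCheck(args *skel.CmdArgs) error { return nil }

func cmdDel(args *skel.CmdArgs) error { return nil }

func main() {
	// Wiring as of cni v1.x; older and newer releases use slightly
	// different PluginMain variants.
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "example CNI plugin")
}
```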
- k8s-the-hard-way
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy.
- Kubernetes Network Policies: A Practitioner's Guide
CNI-type plugins follow the Container Network Interface spec and are used by the community to create plugins with advanced features. Kubenet, on the other hand, uses the bridge and host-local CNI plugins and has only basic features.
- The Sisyphean Task of DNS Client Config on Linux
- Infrastructure Engineering - Diving Deep
CNI (Container Network Interface) is a standard that helps establish interoperability between multiple networking solutions, again avoiding the need for in-tree plugins within the core and separating container networking from container execution. There are a lot of plugins and runtimes that support CNI today.
What are some alternatives?
CoreDNS - CoreDNS is a DNS server that chains plugins
kubeedge - Kubernetes Native Edge Computing Framework (project under CNCF)
kubevirt - Kubernetes Virtualization API and runtime in order to define and manage virtual machines.
kubefed - Kubernetes Cluster Federation
cri-api - Container Runtime Interface (CRI) – a plugin interface which enables kubelet to use a wide variety of container runtimes.
containerlab - container-based networking labs
containerd - An open and reliable container runtime
k8s-the-hard-way
smi-spec - Service Mesh Interface
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
runc - CLI tool for spawning and running containers according to the OCI specification
cri-tools - CLI and validation tools for Kubelet Container Runtime Interface (CRI).