cni vs cilium

| | cni | cilium |
| --- | --- | --- |
| Mentions | 13 | 24 |
| Stars | 5,307 | 18,572 |
| Growth | 0.6% | 1.3% |
| Activity | 7.7 | 10.0 |
| Latest commit | 11 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cni
-
Kubernetes Architecture
The CNI is language-agnostic and there are many different plugins available.
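To see what "plugin" means in practice: the runtime picks up whatever network configuration it finds on disk, so swapping plugins is a matter of changing one JSON file. Below is a sketch in the style of the bridge + host-local example from the CNI README; the network name and subnet are made up.

```bash
# Drop a CNI network config where runtimes look for it (typically /etc/cni/net.d)
cat > /etc/cni/net.d/10-mynet.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
EOF
```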
-
Creating Kubernetes Cluster With CRI-O
Read more about the architecture of CRI-O here. The networking of the pod is set up through CNI, and CRI-O can be used with any CNI plugin.
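For reference, CRI-O's CNI lookup paths are configurable under the [crio.network] section of its config; the values shown in the comments below are the usual defaults, and `crio config` prints the effective configuration.

```bash
# CRI-O reads CNI configs from network_dir and plugin binaries from plugin_dirs
# (defaults shown; see [crio.network] in /etc/crio/crio.conf):
#   network_dir = "/etc/cni/net.d/"
#   plugin_dirs = ["/opt/cni/bin/"]
crio config 2>/dev/null | grep -A 5 '\[crio.network\]'
```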
-
Kubernetes traffic discovery
In generic Kubernetes network policies, there is no action field. The Calico CNI plugin (a Kubernetes network plugin that implements the Container Network Interface) provides this functionality, and in particular provides logging even for allowed traffic. This worked when we tried it in our test clusters and in our own back end.
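For illustration, a Calico policy with an explicit action field looks roughly like this (a sketch against the projectcalico.org/v3 API; the policy name and selector are made up). Logging allowed traffic is done by putting a Log rule before the Allow rule:

```bash
calicoctl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: log-then-allow
  namespace: default
spec:
  selector: app == 'web'
  types:
    - Ingress
  ingress:
    - action: Log     # Calico-specific: record the flow...
    - action: Allow   # ...then let it through
EOF
```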
-
Docker Container to get IP by external DHCP
There is a CNI spec: https://github.com/containernetworking/cni/blob/main/SPEC.md which allows for custom network plugins. That's how AWS/EKS nodes are able to assign VPC-routable IPs to containers running on them.
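Because the spec boils down to "exec a binary, JSON config on stdin, JSON result on stdout, operation in env vars", a plugin can be sketched in a few lines of shell. This is a toy ADD/DEL skeleton, not a usable plugin; real ones like the AWS VPC CNI do the actual interface and IP wiring where the comments are:

```bash
#!/usr/bin/env bash
# Toy CNI plugin skeleton: the runtime sets CNI_COMMAND, CNI_CONTAINERID,
# CNI_NETNS, and CNI_IFNAME, passes the network config on stdin,
# and expects a JSON result on stdout.
config=$(cat)   # network config JSON from the runtime (unused in this toy)

case "$CNI_COMMAND" in
  ADD)
    # A real plugin would create $CNI_IFNAME inside $CNI_NETNS and
    # allocate an IP here; this just echoes a hard-coded result.
    cat <<'EOF'
{ "cniVersion": "1.0.0",
  "ips": [ { "address": "10.22.0.5/16" } ] }
EOF
    ;;
  DEL)
    exit 0 ;;   # release any resources held for $CNI_CONTAINERID
  VERSION)
    echo '{ "cniVersion": "1.0.0", "supportedVersions": ["1.0.0"] }' ;;
esac
```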
-
Minikube now supports rootless podman driver for running Kubernetes
um, they aren't missing anything (but see below). they are k8s.
so if you want the genuine original mainline experience, you go to the project's github repo: they have releases, and they mention that the detailed changelog has links to the binaries. yeey. (https://github.com/kubernetes/kubernetes/blob/master/CHANGEL... .. the client is the kubectl binary, the server has the control plane components, and the node binaries have the worker node stuff.) you then have the option to set those up according to the documentation: generate TLS certs, specify the IP address range for pods (containers), and install dependencies like etcd and a CNI-compatible container network layer provider -- if you have set up overlay networking, e.g. VXLAN or geneve or something fancy with openvswitch's OVN, then the reference CNI plugin is probably sufficient.
at the end of this process you'll have the REST API (kube-apiserver) up and running and you can start submitting jobs (these get persisted into etcd, eventually picked up by the scheduler control loop that calculates what should run where and persists the result back to etcd; then a control loop on a particular worker notices that something new is assigned to it and does the thing: allocates a pod, calls CNI to allocate an IP, etc.)
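A very rough sketch of what "by hand" looks like (the component names and flags are the standard upstream ones, but the CIDRs and file paths are illustrative assumptions, and a real setup also needs TLS certs, kubeconfigs, and service units):

```bash
# Illustrative only -- not a working config.
etcd --data-dir=/var/lib/etcd &

kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.96.0.0/12 &          # assumed service CIDR

kube-controller-manager \
  --kubeconfig=/etc/kubernetes/cm.conf \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 &                     # assumed pod CIDR

kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.conf &

# on each worker node: kubelet plus CNI plugins under /opt/cni/bin
kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
```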
of course if you don't want to do all this by hand you can use a distribution that helps you with setup.
microk8s is a low-memory, low-I/O k8s distro by Canonical (the Ubuntu folks); they run dqlite (distributed sqlite) instead of etcd to lower I/O and memory requirements. many people don't like it because it uses snaps.
k3s was started by the Rancher folks (and is mostly still developed by them?).
there's k0s (for bare metal ... I have no idea what that means though), kind (kubernetes in docker), and there's also k3d (k3s in docker).
these distributions work by consuming/wrapping the k8s components as go libraries - https://github.com/kubernetes/kubernetes/blob/master/staging...
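For reference, the usual one-line installs for these, as documented by each project (pick one; they all end with a kubectl-ready cluster):

```bash
# microk8s (snap-based, Canonical)
sudo snap install microk8s --classic

# k3s (single-binary install script)
curl -sfL https://get.k3s.io | sh -

# kind (kubernetes in docker; assumes docker and the kind binary are installed)
kind create cluster

# k3d (k3s in docker; assumes docker and the k3d binary are installed)
k3d cluster create
```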
...
then there's the whole zoo of various k8s plugins/addons/tools for networking (CNI - https://github.com/containernetworking/cni#3rd-party-plugins), storage (CSI - https://kubernetes-csi.github.io/docs/drivers.html), helm for package management, a ton of security-related things that try to spot errors in all this circus ... and so on.
-
How to install Weave's Ignite for Firecracker VMs with simple script
```bash
#!/usr/bin/bash

# Update apt-get repository and install dependencies
apt-get update && apt-get install -y --no-install-recommends \
  dmsetup openssh-client git binutils

# Install containerd if it's not present -- prevents breaking docker-ce installations
which containerd || apt-get install -y --no-install-recommends containerd

# Installing CNI
# Current version from https://github.com/containernetworking/cni/releases
export CNI_VERSION=v1.0.1
ARCH=$([ "$(uname -m)" = "x86_64" ] && echo amd64 || echo arm64)
export ARCH
sudo mkdir -p /opt/cni/bin
curl -sSL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -xz -C /opt/cni/bin

# Installing Ignite
# Get the current version from https://github.com/weaveworks/ignite/releases
export VERSION=v0.10.0
GOARCH=$(go env GOARCH 2>/dev/null || echo "amd64")
export GOARCH
for binary in ignite ignited; do
  echo "Installing ${binary}..."
  curl -sfLo ${binary} "https://github.com/weaveworks/ignite/releases/download/${VERSION}/${binary}-${GOARCH}"
  chmod +x ${binary}
  sudo mv ${binary} /usr/local/bin
done

# Check if the installation was successful
ignite version
```
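After that, a quick smoke test along the lines of the Ignite quick-start docs (the image is the official ignite-ubuntu one; the VM name and resource sizes are arbitrary):

```bash
# Boot a Firecracker microVM and SSH into it
sudo ignite run weaveworks/ignite-ubuntu \
  --name my-vm --cpus 1 --memory 1GB --size 6GB --ssh
sudo ignite ssh my-vm
```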
-
Solving Four Kubernetes Networking Challenges
The Container Network Interface (CNI) includes a specification for writing network plugins to configure network interfaces. This allows you to create overlay networks that satisfy Pod-to-Pod communication requirements.
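A quick way to see the requirement this satisfies: any two pods must be able to reach each other's pod IPs directly, whichever nodes they land on. A throwaway check (pod names are arbitrary):

```bash
kubectl run pod-a --image=busybox --restart=Never -- sleep 3600
kubectl run pod-b --image=busybox --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/pod-a pod/pod-b

# Ping pod-b's pod IP from inside pod-a
B_IP=$(kubectl get pod pod-b -o jsonpath='{.status.podIP}')
kubectl exec pod-a -- ping -c 3 "$B_IP"
```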
-
k8s-the-hard-way
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy.
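The download step for those components looks roughly like this (the URL patterns are each project's official release locations, but the version numbers here are illustrative assumptions; pin whatever the guide you follow specifies):

```bash
K8S_VERSION=v1.28.0   # assumed versions throughout
wget -q \
  "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubelet" \
  "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kube-proxy" \
  "https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64" \
  "https://github.com/containerd/containerd/releases/download/v1.7.2/containerd-1.7.2-linux-amd64.tgz" \
  "https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz"

chmod +x kubelet kube-proxy runc.amd64
sudo mv kubelet kube-proxy /usr/local/bin/
sudo mv runc.amd64 /usr/local/bin/runc
sudo mkdir -p /opt/cni/bin
sudo tar -xzf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin
sudo tar -xzf containerd-1.7.2-linux-amd64.tgz -C /usr/local   # binaries land in /usr/local/bin
```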
-
Kubernetes Network Policies: A Practitioner's Guide
CNI-type plugins follow the Container Network Interface spec and are used by the community to create plugins with advanced features. Kubenet, on the other hand, uses the bridge and host-local CNI plugins and has only basic features.
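For contrast with the Calico example above, a generic Kubernetes NetworkPolicy has no action field; allow semantics are implicit in the rules. A minimal sketch (the labels and name are made up):

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
EOF
```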
- Release CNI v1.0.1 · containernetworking/cni
cilium
-
Cisco to Acquire Cloud Native Networking and Security Leader Isovalent
They would have had to add a few external maintainers to get to Graduated, but it's definitely a minority:
https://github.com/cilium/cilium/blob/main/MAINTAINERS.md
-
An opinionated template for deploying a single k3s cluster with Ansible backed by Flux, SOPS, GitHub Actions, Renovate, Cilium, Cloudflare and more!
Next-gen networking thanks to Cilium
-
Route Pod-Traffic Through WireGuard w/ Cilium
Hello there, I recently have the need to proxy my pod traffic through WireGuard. I initially had my eyes on https://github.com/angelnu/pod-gateway but I just couldn't get it working. It turns out that Cilium shipped a CVE patch a couple of years ago that basically nuked the ability to do inter-pod encapsulated traffic (https://github.com/cilium/cilium/issues/15991). I wonder if there is any other way to do this without switching away from Cilium? Thank you guys in advance :)
-
Creating Kubernetes Cluster With CRI-O
I have used Cilium as the CNI, installing it with Helm.
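That install is just the standard chart (repo URL from the Cilium docs; the version pin is an assumption -- use whatever release matches your cluster):

```bash
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium \
  --namespace kube-system \
  --version 1.14.5   # assumed; pick a current release
```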
-
Need advice on K3s cluster setup
I'm using the default RaspiOS Lite 64-bit, and as highlighted in this issue, the RaspiOS kernel does not support CONFIG_ARM64_VA_BITS_48, which makes cilium-envoy fail to build. As a solution, I was told to use either Ubuntu as the base OS or the Traefik Ingress Controller, which is not configured in K3s.
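If you want to check your own kernel before committing to a base OS, the flag is visible in the kernel config; the path varies by distro, so both common locations are tried below:

```bash
# Look for CONFIG_ARM64_VA_BITS_48=y (or the configured VA_BITS value)
grep 'CONFIG_ARM64_VA_BITS' "/boot/config-$(uname -r)" 2>/dev/null \
  || zcat /proc/config.gz 2>/dev/null | grep 'CONFIG_ARM64_VA_BITS'
```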
- MetalLB or Cilium?
-
Ask r/kubernetes: What are you working on this week?
Working on integrating cilium and loxilb as a hobby k8s project. Both are eBPF-based, and it will be interesting to see the final outcome.
-
Saying Goodbye to Ingress: Embracing the Future of Kubernetes Traffic Management with Gateway API and Cilium
Particularly in Cilium, Gateway API support is very much proof-of-concept. So much so that you can't even change the type of the underlying Service (or anything else about the generated object) yet.
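For context, the Gateway object in question is the standard Gateway API one; Cilium registers a GatewayClass named cilium and generates a Service behind each Gateway, which is the object the comment says you can't customize yet. A minimal sketch (the name is made up, and the apiVersion may be v1beta1 on older Gateway API installs):

```bash
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: cilium
  listeners:
    - name: http
      protocol: HTTP
      port: 80
EOF
```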
-
Isn't Istio Ambient mesh a fantastic step to simplify operating istio? Here's a video explaining the architecture!
Authentication using mTLS was later merged into cilium (https://github.com/cilium/cilium/pull/24263). It uses mTLS between cilium agents to authorize flows, but do note that the mTLS auth is de-coupled from the datapath transport (i.e. you need to configure cilium to use IPsec or WireGuard, as otherwise traffic won't be encrypted). As a consequence, there are some gaps in the implementation right now, like packet drops; see https://github.com/cilium/cilium/issues/23808
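The datapath encryption referred to there is enabled separately from the mTLS auth, e.g. via the chart's encryption values (WireGuard shown; IPsec additionally needs a key secret):

```bash
# Transparent node-to-node encryption; independent of the mTLS auth layer.
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set encryption.enabled=true \
  --set encryption.type=wireguard
```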
-
libvirt-k8s-provisioner - Ansible and Terraform to build a cluster from scratch in less than 10 minutes on KVM - Updated for 1.26
network plugin to be used, based on the documentation (Project Calico, Flannel, Cilium).
What are some alternatives?
CoreDNS - CoreDNS is a DNS server that chains plugins
antrea - Kubernetes networking based on Open vSwitch
containerlab - container-based networking labs
multus-cni - A CNI meta-plugin for multi-homed pods in Kubernetes
cri-api - Container Runtime Interface (CRI) - a plugin interface which enables kubelet to use a wide variety of container runtimes.
kilo - Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)
containerd - An open and reliable container runtime
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
k8s-the-hard-way
pixie - Instant Kubernetes-Native Application Observability
virtual-kubelet - Virtual Kubelet is an open source Kubernetes kubelet implementation.
sriov-network-device-plugin - SRIOV network device plugin for Kubernetes