cni vs smi-spec

Compare cni and smi-spec to see how they differ.

cni

Container Network Interface - networking for Linux containers (by containernetworking)

smi-spec

Service Mesh Interface (by servicemeshinterface)

                cni                  smi-spec
Mentions        13                   12
Stars           5,307                1,047
Growth          1.3%                 -
Activity        7.7                  2.7
Last commit     8 days ago           6 months ago
Language        Go                   Makefile
License         Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cni

Posts with mentions or reviews of cni. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-15.
  • Kubernetes Architecture
    4 projects | dev.to | 15 Aug 2023
    The CNI is language-agnostic and there are many different plugins available.
  • Creating Kubernetes Cluster With CRI-O
    7 projects | dev.to | 30 Jul 2023
    Read more about the architecture of CRI-O here. The networking of the pod is set up through CNI, and CRI-O can be used with any CNI plugin.
  • Kubernetes traffic discovery
    3 projects | dev.to | 4 Jun 2023
    In generic Kubernetes network policies, there is no action field. The Calico CNI plugin (a Kubernetes network plugin that implements the Container Network Interface) provides this functionality, and in particular provides logging even for allowed traffic. This worked when we tried it in our test clusters and in our own back end.
  • Docker Container to get IP by external DHCP
    1 project | /r/docker | 7 Apr 2023
    There is a CNI spec: https://github.com/containernetworking/cni/blob/main/SPEC.md which allows for custom network plugins. That's how AWS/EKS nodes are able to assign VPC-routable IPs to containers running on them. (A minimal plugin skeleton is sketched just after this list.)
  • Minikube now supports rootless podman driver for running Kubernetes
    11 projects | news.ycombinator.com | 22 Jun 2022
    um, they aren't missing anything (but see below). they are k8s.

    so if you want to get the genuine original mainline experience you go to the project's github repo, they have releases, and mention that the detailed changelog has links to the binaries. yeey. (https://github.com/kubernetes/kubernetes/blob/master/CHANGEL... .. the client is the kubectl binary, the server has the control plane components the node binaries have the worker node stuff), you then have the option to set those up according to the documentation (generate TLS certs, specify the IP address range for pods (containers), install dependencies like etcd, and a CNI compatible container network layer provider -- if you have setup overlay networking eg. VXLAN or geneve or something fancy with openvswitch's OVN -- then the reference CNI plugin is probably sufficient)

    at the end of this process you'll have the REST API (kube-apiserver) up and running and you can start submitting jobs (that will be persisted into etcd, eventually picked up by the scheduler control loop that calculates what should run where and persists it back to etcd, then a control loop on a particular worker will notice that something new is assigned to it, and it'll do the thing, allocate a pod, call CNI to allocate IP, etc.)

    of course if you don't want to do all this by hand you can use a distribution that helps you with setup.

    microk8s is a low-memory low-IO k8s distro by Canonical (Ubuntu folks) and they run dqlite (distributed sqlite) instead of etcd (to lower I/O and memory requirements), many people don't like it because it uses snaps

    k3s is started by Rancher folks (and mostly still developed by them?),

    there's k0s (for bare metal ... I have no idea what that means though), kind (kubernetes in docker), there's also k3d (k3s in docker)

    these distributions work by consuming/wrapping the k8s components as go libraries - https://github.com/kubernetes/kubernetes/blob/master/staging...

    ...

    then there's the whole zoo of various k8s plugins/addons/tools for networking (CNI - https://github.com/containernetworking/cni#3rd-party-plugins), storage (CSI - https://kubernetes-csi.github.io/docs/drivers.html), helm for package management, a ton of security-related things that try to spot errors in all this circus ... and so on.

  • How to install Weave's Ignite for Firecracker VMs with simple script
    3 projects | dev.to | 20 Feb 2022
    #! /usr/bin/bash
    # Update the apt-get repository and install dependencies
    apt-get update && apt-get install -y --no-install-recommends dmsetup openssh-client git binutils
    # Install containerd if it's not present -- prevents breaking docker-ce installations
    which containerd || apt-get install -y --no-install-recommends containerd
    # Installing CNI
    # Current version from https://github.com/containernetworking/cni/releases
    export CNI_VERSION=v1.0.1
    ARCH=$([ "$(uname -m)" = "x86_64" ] && echo amd64 || echo arm64)
    export ARCH
    sudo mkdir -p /opt/cni/bin
    curl -sSL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -xz -C /opt/cni/bin
    # Installing Ignite
    # Get the current version from https://github.com/weaveworks/ignite/releases
    export VERSION=v0.10.0
    GOARCH=$(go env GOARCH 2>/dev/null || echo "amd64")
    export GOARCH
    for binary in ignite ignited; do
        echo "Installing ${binary}..."
        curl -sfLo ${binary} "https://github.com/weaveworks/ignite/releases/download/${VERSION}/${binary}-${GOARCH}"
        chmod +x ${binary}
        sudo mv ${binary} /usr/local/bin
    done
    # Check if the installation was successful
    ignite version
  • Solving Four Kubernetes Networking Challenges
    2 projects | dev.to | 18 Jan 2022
    The Container Network Interface (CNI) includes a specification for writing network plugins to configure network interfaces. This allows you to create overlay networks that satisfy Pod-to-Pod communication requirements.
  • k8s-the-hard-way
    11 projects | dev.to | 26 Oct 2021
    In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy.
  • Kubernetes Network Policies: A Practitioner's Guide
    2 projects | dev.to | 9 Sep 2021
    CNI-type plugins follow the Container Network Interface spec and are used by the community to create plugins with advanced features. Kubenet, by contrast, uses the bridge and host-local CNI plugins and offers only basic features. (A bridge + host-local configuration is sketched just after this list.)
  • Release 🎉 CNI v1.0.1 🎉 · containernetworking/cni
    1 project | /r/devopsish | 8 Sep 2021
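
The Docker/DHCP post above points at the spec itself; to make that concrete, below is a minimal sketch of a do-nothing plugin built on the skel, types, and version helper packages that ship in this repo. It assumes the v1.x Go libraries (the PluginMain signature has changed across releases), and the plugin name and behavior are illustrative only.

    // A no-op CNI plugin sketch: parse the config from stdin, return an empty result.
    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/containernetworking/cni/pkg/skel"
        "github.com/containernetworking/cni/pkg/types"
        current "github.com/containernetworking/cni/pkg/types/100"
        "github.com/containernetworking/cni/pkg/version"
    )

    func cmdAdd(args *skel.CmdArgs) error {
        // The runtime passes the network configuration JSON on stdin.
        conf := types.NetConf{}
        if err := json.Unmarshal(args.StdinData, &conf); err != nil {
            return fmt.Errorf("failed to parse network configuration: %w", err)
        }
        // A real plugin would create interfaces inside args.Netns here and
        // record the addresses it assigned in the result.
        return types.PrintResult(&current.Result{CNIVersion: conf.CNIVersion}, conf.CNIVersion)
    }

    func cmdDel(args *skel.CmdArgs) error {
        return nil // tear down whatever cmdAdd created; nothing to do in a no-op plugin
    }

    func cmdCheck(args *skel.CmdArgs) error {
        return nil
    }

    func main() {
        skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "no-op CNI plugin sketch")
    }

Built and dropped into /opt/cni/bin, a binary like this is invoked by the runtime exactly as the bundled plugins are.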
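
The network-policy guide above contrasts kubenet's bridge + host-local pairing with full-featured CNI plugins. From the runtime's side, such a plugin chain is driven through the libcni package in this repo; here is a sketch under assumptions, where the subnet, container ID, and netns path are placeholders rather than values the project prescribes.

    package main

    import (
        "context"

        "github.com/containernetworking/cni/libcni"
    )

    // A network configuration list pairing the bridge plugin with
    // host-local IPAM -- the same pair kubenet wires together.
    // All values are placeholders.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "demo-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.22.0.0/16",
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        }
      ]
    }`

    func main() {
        // Plugin binaries (bridge, host-local, ...) are looked up on this path.
        cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

        list, err := libcni.ConfListFromBytes([]byte(conflist))
        if err != nil {
            panic(err)
        }

        rt := &libcni.RuntimeConf{
            ContainerID: "example-container",         // placeholder
            NetNS:       "/var/run/netns/example-ns", // placeholder
            IfName:      "eth0",
        }

        // ADD runs each plugin in the list in order and returns the final result.
        result, err := cni.AddNetworkList(context.Background(), list, rt)
        if err != nil {
            panic(err)
        }
        if err := result.Print(); err != nil { // dump the result JSON to stdout
            panic(err)
        }
    }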

smi-spec

Posts with mentions or reviews of smi-spec. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-08.
  • A Comprehensive Guide to API Gateways, Kubernetes Gateways, and Service Meshes
    9 projects | dev.to | 8 Jun 2023
    The Service Mesh Interface (SMI) specification was created to solve this portability issue.
  • Service Mesh Use Cases
    2 projects | news.ycombinator.com | 11 Feb 2023
    > I suspect if a Service Mesh is ultimately shown to have broad value, one will make its way into the K8S core

    I'm not so sure. I suspect it'll follow the same roadmap as Gateway API, which it already kind of is with the Service Mesh Interface (https://smi-spec.io/)

  • Service Mesh Considerations
    7 projects | dev.to | 14 Dec 2022
    It is very common for a service mesh to deploy a control plane and a data plane. The control plane does what you might expect: it controls the service mesh and gives you the ability to interact with it. Many service meshes implement the Service Mesh Interface (SMI), an API specification that standardizes the way cluster operators interact with and implement mesh features. (A TrafficSplit sketch follows this list.)
  • Kubernetes: Cross-cluster traffic scheduling - Access control
    2 projects | dev.to | 11 Dec 2022
    Before we start, let's review the SMI Access Control specification. There are two forms of traffic policy in osm-edge: Permissive Mode and Traffic Policy Mode. The former allows services in the mesh to access each other freely, while the latter requires an appropriate traffic policy before a service can be reached. (A TrafficTarget sketch follows this list.)
  • Announcing osm-edge 1.1: ARM support and more
    7 projects | dev.to | 28 Jul 2022
    osm-edge is a simple, complete, and standalone service mesh and ships out-of-the-box with all the necessary components to deploy a complete service mesh. As a lightweight and SMI-compatible Service Mesh, osm-edge is designed to be intuitive and scalable.
  • KubeCon 2022 - Day 1
    2 projects | dev.to | 18 May 2022
  • Kubernetes State Of The Union - KubeCon 2019, San Diego
    3 projects | /r/kubernetes | 21 Mar 2022
    I started on Monday, attending ServiceMeshCon2019. My guesstimate is that about 1000 people attended it. I believe Service Mesh is playing such a crucial role in scaling cloud native technologies that large-scale cloud-native deployments may not be possible without service mesh. Just like you cannot really succeed in deploying a microservices-based application without a microservices orchestration engine, like Kubernetes, you cannot scale the size and capacity of a microservices-based application without service mesh. That's what makes it so compelling to see all the service mesh creators (Istio, Linkerd, Consul, Kuma) and listen to them. There was also a lot of discussion of SMI (Service Mesh Interface), a common interface among all service meshes. The panel at the end of the day included all the major service mesh players, and some very thought-provoking questions were asked and answered by the panel.
  • GraphQL - Usecase and Architecture
    8 projects | dev.to | 29 Jul 2021
    Do you need a Service Mesh?
  • Introducing the Cloud Native Compute Foundation (CNCF)
    6 projects | dev.to | 13 Jul 2021
    In the episode with Annie, she gave a great overview of the CNCF and a handful of projects that she's excited about. Those include Helm, Linkerd, Kudo, Keda and Artifact Hub. I gave a bonus example of the Service Mesh Interface project.
  • Service Mesh Interface
    2 projects | dev.to | 2 May 2021
    SMI official website: https://smi-spec.io
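
The Service Mesh Considerations post above calls SMI an API specification for standardizing how operators interact with a mesh; the most tangible artifacts of that are its CRDs. Below is a sketch that builds one of them, a TrafficSplit, using the companion smi-sdk-go library (github.com/servicemeshinterface/smi-sdk-go). The service names, namespace, and weights are invented, and the field layout assumes the v1alpha2 split API.

    // Sketch: an SMI TrafficSplit shifting 10% of a root service's traffic
    // to a canary backend. All names and weights are invented.
    package main

    import (
        "fmt"

        splitv1alpha2 "github.com/servicemeshinterface/smi-sdk-go/pkg/apis/split/v1alpha2"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        ts := &splitv1alpha2.TrafficSplit{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "split.smi-spec.io/v1alpha2",
                Kind:       "TrafficSplit",
            },
            ObjectMeta: metav1.ObjectMeta{Name: "checkout-split", Namespace: "shop"},
            Spec: splitv1alpha2.TrafficSplitSpec{
                // Clients keep calling the root service; the mesh splits the traffic.
                Service: "checkout",
                Backends: []splitv1alpha2.TrafficSplitBackend{
                    {Service: "checkout-v1", Weight: 90},
                    {Service: "checkout-v2", Weight: 10},
                },
            },
        }
        // Emit the manifest; any SMI-compatible mesh can consume it.
        out, err := yaml.Marshal(ts)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }

The point of the interface is exactly this portability: the same manifest should drive Linkerd, Open Service Mesh, or any other SMI-compatible implementation.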
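
Likewise, the access-control post above refers to the SMI Traffic Access Control specification. The sketch below assembles a TrafficTarget that admits exactly one source service account; it assumes the v1alpha3 access types in smi-sdk-go, every name here (bookbuyer, bookstore, the route group) is hypothetical, and the referenced HTTPRouteGroup would be defined separately.

    // Sketch: an SMI TrafficTarget allowing only the bookbuyer service
    // account to reach the bookstore service account. Names are hypothetical.
    package main

    import (
        "fmt"

        accessv1alpha3 "github.com/servicemeshinterface/smi-sdk-go/pkg/apis/access/v1alpha3"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        tt := &accessv1alpha3.TrafficTarget{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "access.smi-spec.io/v1alpha3",
                Kind:       "TrafficTarget",
            },
            ObjectMeta: metav1.ObjectMeta{Name: "bookbuyer-to-bookstore", Namespace: "bookstore"},
            Spec: accessv1alpha3.TrafficTargetSpec{
                Destination: accessv1alpha3.IdentityBindingSubject{
                    Kind: "ServiceAccount", Name: "bookstore", Namespace: "bookstore",
                },
                Sources: []accessv1alpha3.IdentityBindingSubject{
                    {Kind: "ServiceAccount", Name: "bookbuyer", Namespace: "bookbuyer"},
                },
                // Rules bind the target to routes defined in a separate
                // (hypothetical) HTTPRouteGroup resource.
                Rules: []accessv1alpha3.TrafficTargetRule{
                    {Kind: "HTTPRouteGroup", Name: "bookstore-routes", Matches: []string{"buy-a-book"}},
                },
            },
        }
        out, err := yaml.Marshal(tt)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }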

What are some alternatives?

When comparing cni and smi-spec you can also consider the following projects:

CoreDNS - CoreDNS is a DNS server that chains plugins

cloudwithchris.com - Cloud With Chris is my personal blogging, podcasting and vlogging platform where I talk about all things cloud. I also invite guests to talk about their experiences with the cloud and hear about lessons learned along their journey.

containerlab - container-based networking labs

emissary - open source Kubernetes-native API gateway for microservices built on the Envoy Proxy

cri-api - Container Runtime Interface (CRI) - a plugin interface which enables kubelet to use a wide variety of container runtimes.

pipy - Pipy is a programmable proxy for the cloud, edge and IoT.

containerd - An open and reliable container runtime

osm-edge - osm-edge is a lightweight service mesh for edge computing. It's forked from openservicemesh/osm and uses pipy as the sidecar proxy.

k8s-the-hard-way

kubefed - Kubernetes Cluster Federation

virtual-kubelet - Virtual Kubelet is an open source Kubernetes kubelet implementation.

envoy - Cloud-native high-performance edge/middle/service proxy