| | network-mapper | cni |
|---|---|---|
| Mentions | 10 | 13 |
| Stars | 570 | 5,318 |
| Growth | 0.5% | 0.8% |
| Activity | 8.7 | 7.7 |
| Last commit | 3 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
network-mapper
- Network Mapper – low privileges, no-eBPF network observability tool for K8s
- Otterize launches open-source, declarative IAM permissions for workloads on AWS EKS clusters
Yep! When you deploy Otterize, you get a map of your cluster's traffic, with zero configuration, through the open-source network-mapper.
- Kubernetes traffic discovery
After multiple iterations, research sessions, and some trial & error, we were able to produce an exportable list of network connections in any Kubernetes cluster. You might recall that our larger goal was to get to a logical (functional) map of pod-to-pod traffic; that will be covered in a future post. After adding that capability, here's an example output from our project, now called network-mapper, when pointed at one of the clusters in our "lab" environment.
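That excerpt is about resolving raw connections into a logical workload map. As a rough illustration of one building block of that step (this is not network-mapper's actual code, just a hedged sketch using client-go), here is how you might build a pod-IP-to-workload table so captured connections can be labeled with workload names:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Connect with the same kubeconfig kubectl uses.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build an IP -> workload table: captured connections are per-IP, but a
	// logical map needs workload names, so owner references are followed.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ipToWorkload := make(map[string]string)
	for _, pod := range pods.Items {
		owner := pod.Name // bare pods: fall back to the pod's own name
		if len(pod.OwnerReferences) > 0 {
			owner = pod.OwnerReferences[0].Name // e.g. the owning ReplicaSet
		}
		if pod.Status.PodIP != "" {
			ipToWorkload[pod.Status.PodIP] = pod.Namespace + "/" + owner
		}
	}
	for ip, workload := range ipToWorkload {
		fmt.Printf("%-15s -> %s\n", ip, workload)
	}
}
```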
- Show HN: Visualize Kubernetes Clusters
- Visualizing Kubernetes traffic, the non-invasive way
It'll require some changes, but you can go for it if that's up your alley; after all, it's all open source - https://github.com/otterize/network-mapper
- GitHub - otterize/network-mapper: Map Kubernetes in-cluster traffic and export as text, intents, or an image
- Open-source Kubernetes traffic visualizer - Otterize network mapper
We received some great feedback from the community regarding our tool, and one of the most commonly requested features was visualization. So we embedded this functionality into the tool, and now you can easily map and visualize your cluster with a single CLI command.
- Alternative to Network Policies
As you've mentioned, it is not possible to define deny rules using the native NetworkPolicy resource. Instead, you could use your CNI's implementation of network policies: if you use Calico as your CNI, you can use Calico's network policies to create deny rules. You can also take a look at Otterize OSS, an open-source solution my team and I have been working on recently. It simplifies network policies by defining them from the client's perspective in a ClientIntents resource. You can use the network mapper to auto-generate those ClientIntents from the traffic in your cluster, then deploy them and let the intents-operator manage the network policies for you.
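To make the allow-list semantics concrete: a NetworkPolicy has no deny verb; selecting a pod with an ingress policy implicitly denies everything the rules don't allow. Below is a minimal sketch using the upstream Go types (the namespace, names, and labels are invented, and the intents-operator's real output may differ):

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Selecting the "server" pods makes them default-deny for ingress; the
	// single From clause then re-allows traffic from "client" pods only.
	policy := networkingv1.NetworkPolicy{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{Name: "access-to-server", Namespace: "prod"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "server"}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "client"}},
				}},
			}},
		},
	}
	// Print the manifest as YAML, the shape you would kubectl apply.
	out, err := yaml.Marshal(policy)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```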
- Otterize network mapper - map Kubernetes in-cluster traffic with zero-config
cni
- Kubernetes Architecture
The CNI is language-agnostic and there are many different plugins available.
- Creating Kubernetes Cluster With CRI-O
Read more about the architecture of CRI-O here. The networking of the pod is set up through CNI, and CRI-O can be used with any CNI plugin.
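Mechanically, "CRI-O can be used with any CNI plugin" means the runtime reads a network config list (normally from /etc/cni/net.d) and execs the named plugin binaries through the libcni package. A rough sketch, assuming the cni v1.x Go module; the network name, subnet, container ID, and netns path are made up, and actually running it needs root plus the plugin binaries in /opt/cni/bin:

```go
package main

import (
	"context"

	"github.com/containernetworking/cni/libcni"
)

// A minimal config list using the bridge + host-local plugins (the same
// pairing kubenet uses). The name and subnet are invented for this sketch.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }
  ]
}`

func main() {
	netconf, err := libcni.ConfListFromBytes([]byte(conflist))
	if err != nil {
		panic(err)
	}
	// Plugin binaries (bridge, host-local, ...) are looked up in /opt/cni/bin.
	cninet := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
	rt := &libcni.RuntimeConf{
		ContainerID: "example-ctr",            // hypothetical sandbox ID
		NetNS:       "/var/run/netns/example", // hypothetical netns path
		IfName:      "eth0",
	}
	// ADD wires the container into the network; the result carries the IPs.
	result, err := cninet.AddNetworkList(context.Background(), netconf, rt)
	if err != nil {
		panic(err)
	}
	if err := result.Print(); err != nil { // dump the result JSON to stdout
		panic(err)
	}
}
```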
- Kubernetes traffic discovery
In generic Kubernetes network policies, there is no action field. The Calico CNI plugin (a Kubernetes network plugin that implements the Container Network Interface) provides this functionality, and in particular provides logging even for allowed traffic. This worked when we tried it in our test clusters and in our own back end.
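For readers unfamiliar with the distinction: vanilla NetworkPolicy rules carry no verb (a rule's presence means allow), while Calico-style rules carry an explicit action, which is what makes Deny and Log possible. A conceptual Go sketch with invented types (this is not Calico's actual API, only an illustration of the evaluation model):

```go
package main

import "fmt"

// Action is the verb vanilla NetworkPolicy rules lack.
type Action string

const (
	Allow Action = "Allow"
	Deny  Action = "Deny"
	Log   Action = "Log" // log the flow, then keep evaluating rules
)

// Rule is a deliberately simplified, hypothetical policy rule.
type Rule struct {
	Action   Action
	Source   string // e.g. a label selector like "app == 'client'"
	DestPort int
}

func evaluate(rules []Rule, source string, port int) Action {
	for _, r := range rules {
		if r.Source == source && r.DestPort == port {
			if r.Action == Log {
				fmt.Printf("flow %s -> :%d logged\n", source, port)
				continue // Log is not terminal
			}
			return r.Action
		}
	}
	return Deny // default deny once a policy selects the endpoint
}

func main() {
	rules := []Rule{
		{Action: Log, Source: "app == 'client'", DestPort: 8080},
		{Action: Allow, Source: "app == 'client'", DestPort: 8080},
	}
	// Logs the allowed flow first, then allows it: logging even for
	// allowed traffic, as described in the excerpt above.
	fmt.Println(evaluate(rules, "app == 'client'", 8080))
}
```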
- Docker Container to get IP by external DHCP
There is a CNI spec: https://github.com/containernetworking/cni/blob/main/SPEC.md which allows for custom network plugins. That's how AWS/EKS nodes are able to assign VPC-routable IPs to containers running on them.
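For a sense of what such a custom plugin looks like: a CNI plugin is just an executable invoked with ADD/CHECK/DEL commands and the network config on stdin. A minimal skeleton using the cni repo's own Go skel package (the DHCP logic itself is elided here; the containernetworking/plugins repo ships a real dhcp IPAM plugin):

```go
package main

import (
	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

// cmdAdd runs on CNI ADD: the runtime passes the network config on stdin
// and the container's netns/ifname via args.
func cmdAdd(args *skel.CmdArgs) error {
	// A real plugin would obtain a lease (DHCP or other IPAM) here and move
	// an interface into args.Netns; this stub returns an empty result.
	result := &current.Result{CNIVersion: current.ImplementedSpecVersion}
	return types.PrintResult(result, current.ImplementedSpecVersion)
}

func cmdDel(args *skel.CmdArgs) error { return nil }   // release the lease / tear down
func cmdCheck(args *skel.CmdArgs) error { return nil } // verify the setup still holds

func main() {
	// skel handles CNI_COMMAND dispatch, stdin parsing, and error formatting.
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "demo plugin")
}
```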
- Minikube now supports rootless podman driver for running Kubernetes
Um, they aren't missing anything (but see below); they are k8s.
So if you want the genuine original mainline experience, you go to the project's GitHub repo: they have releases, and the detailed changelog has links to the binaries (https://github.com/kubernetes/kubernetes/blob/master/CHANGEL... ; the client archive is the kubectl binary, the server archive has the control plane components, and the node archives have the worker node stuff). You then have the option to set those up according to the documentation: generate TLS certs, specify the IP address range for pods (containers), and install dependencies like etcd and a CNI-compatible container network layer provider. If you have set up overlay networking, e.g. VXLAN or Geneve or something fancy with Open vSwitch's OVN, then the reference CNI plugins are probably sufficient.
At the end of this process you'll have the REST API (kube-apiserver) up and running, and you can start submitting jobs. These get persisted into etcd and eventually picked up by the scheduler control loop, which calculates what should run where and persists the decision back to etcd; then a control loop on a particular worker notices that something new is assigned to it and does the thing: allocates a pod, calls CNI to allocate an IP, and so on (a toy sketch of this reconcile loop appears below).
Of course, if you don't want to do all this by hand, you can use a distribution that helps you with setup.
microk8s is a low-memory, low-I/O k8s distro by Canonical (the Ubuntu folks); it runs dqlite (distributed SQLite) instead of etcd to lower I/O and memory requirements. Many people don't like it because it uses snaps.
k3s was started by the Rancher folks (and is mostly still developed by them?).
There's k0s (for bare metal ... I have no idea what that means though), kind (Kubernetes in Docker), and also k3d (k3s in Docker).
These distributions work by consuming/wrapping the k8s components as Go libraries - https://github.com/kubernetes/kubernetes/blob/master/staging...
...
Then there's the whole zoo of k8s plugins/addons/tools: networking (CNI - https://github.com/containernetworking/cni#3rd-party-plugins), storage (CSI - https://kubernetes-csi.github.io/docs/drivers.html), Helm for package management, a ton of security-related things that try to spot errors in all this circus ... and so on.
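The control-loop description in the comment above is the reconcile pattern that runs everywhere in k8s. A toy, self-contained Go sketch of the worker-side loop (all names invented; the real components use watches rather than polling):

```go
package main

import (
	"fmt"
	"time"
)

// Pod stands in for the object the scheduler binds to a node in etcd.
type Pod struct{ Name string }

// desiredPods stands in for a watch/list against the API server, returning
// the pods currently assigned to this worker.
func desiredPods() []Pod {
	return []Pod{{Name: "web-1"}, {Name: "web-2"}}
}

// running is the worker's actual state (what the kubelet tracks for real).
var running = map[string]bool{}

// startPod stands in for the kubelet's work: create the sandbox, call the
// CNI plugin to allocate the pod's IP, then start the containers.
func startPod(p Pod) {
	fmt.Println("allocating sandbox + CNI IP for", p.Name)
	running[p.Name] = true
}

func main() {
	// Reconcile: compare desired vs actual state and act on the difference.
	// Bounded here so the sketch terminates.
	for i := 0; i < 3; i++ {
		for _, p := range desiredPods() {
			if !running[p.Name] {
				startPod(p)
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```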
- How to install Weave's Ignite for Firecracker VMs with a simple script
#!/usr/bin/bash
# Update apt-get repository and install dependencies
apt-get update && apt-get install -y --no-install-recommends dmsetup openssh-client git binutils
# Install containerd if it's not present -- prevents breaking docker-ce installations
which containerd || apt-get install -y --no-install-recommends containerd
# Installing CNI
# Current version from https://github.com/containernetworking/cni/releases
export CNI_VERSION=v1.0.1
ARCH=$([ "$(uname -m)" = "x86_64" ] && echo amd64 || echo arm64)
export ARCH
sudo mkdir -p /opt/cni/bin
curl -sSL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -xz -C /opt/cni/bin
# Installing Ignite
# Get the current version from https://github.com/weaveworks/ignite/releases
export VERSION=v0.10.0
GOARCH=$(go env GOARCH 2>/dev/null || echo "amd64")
export GOARCH
for binary in ignite ignited; do
    echo "Installing ${binary}..."
    curl -sfLo ${binary} "https://github.com/weaveworks/ignite/releases/download/${VERSION}/${binary}-${GOARCH}"
    chmod +x ${binary}
    sudo mv ${binary} /usr/local/bin
done
# Check if the installation was successful
ignite version
- Solving Four Kubernetes Networking Challenges
The Container Network Interface (CNI) includes a specification for writing network plugins to configure network interfaces. This allows you to create overlay networks that satisfy Pod-to-Pod communication requirements.
- k8s-the-hard-way
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy.
- Kubernetes Network Policies: A Practitioner's Guide
CNI-type plugins follow the Container Network Interface spec, which the community uses to create plugins with advanced features. Kubenet, on the other hand, uses the bridge and host-local CNI plugins and has only basic features.
- Release 🎉 CNI v1.0.1 🎉 · containernetworking/cni
What are some alternatives?
echopod - The minimal HTTP server that provides info about container/pod.
CoreDNS - CoreDNS is a DNS server that chains plugins
tic-tac-toe - 🎮 Tic Tac Toe implementation over network 🌐
containerlab - container-based networking labs
intents-operator - Manage network policies, AWS, GCP & Azure IAM policies, Istio Authorization Policies, and Kafka ACLs in a Kubernetes cluster with ease.
cri-api - Container Runtime Interface (CRI) – a plugin interface which enables kubelet to use a wide variety of container runtimes.
grafana-operator - An operator for Grafana that installs and manages Grafana instances, Dashboards and Datasources through Kubernetes/OpenShift CRs
containerd - An open and reliable container runtime
kubeshark - The API traffic analyzer for Kubernetes providing real-time K8s protocol-level visibility, capturing and monitoring all traffic and payloads going in, out and across containers, pods, nodes and clusters. Inspired by Wireshark, purposely built for Kubernetes
k8s-the-hard-way
kubetunnel - Develop microservices locally while being connected to your Kubernetes environment
virtual-kubelet - Virtual Kubelet is an open source Kubernetes kubelet implementation.