k3s vs fsm

| | k3s | fsm |
|---|---|---|
| Mentions | 7 | 5 |
| Stars | 15,937 | 41 |
| Growth | - | - |
| Activity | 9.2 | 9.3 |
| Latest commit | about 3 years ago | about 23 hours ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k3s
-
Kubernetes: Multi-cluster communication with Flomesh Service Mesh (Part 2)
In this demo, we will be using k3d, a lightweight wrapper that runs k3s (Rancher Labs' minimal Kubernetes distribution) in Docker, to create four separate clusters named control-plane, cluster-1, cluster-2, and cluster-3 respectively.
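The multi-cluster setup described above can be sketched with a few k3d commands; this assumes k3d v5+ and kubectl are already installed, and uses the cluster names from the demo:

```shell
# Create the four clusters used in the demo (each runs k3s inside Docker)
for name in control-plane cluster-1 cluster-2 cluster-3; do
  k3d cluster create "$name"
done

# Verify the clusters exist and inspect the kubeconfig contexts;
# k3d prefixes each context with "k3d-", e.g. k3d-cluster-1
k3d cluster list
kubectl config get-contexts
```

k3d merges each cluster's credentials into the default kubeconfig, so switching between clusters is a matter of `kubectl config use-context k3d-<name>`.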
-
Pipy: Protecting Kubernetes Apps from SQL Injection & XSS Attacks
To run the demo locally, we recommend k3d, a lightweight wrapper that runs k3s (Rancher Labs' minimal Kubernetes distribution) in Docker.
-
When a node goes down, how long should k8s wait before migrating pods to other nodes?
I've been messing around with k8s (k3s) lately, and ran into the "issue" of downtime/inconsistencies caused by one of several worker nodes going down while it had pods running on it. I found a couple of useful parameters that helped me reduce the time needed to redeploy the old pods on other nodes, as well as stop sending requests to the NotReady node. But that got me thinking: how long should k8s wait before doing these things? Or is there perhaps a better option for increasing availability?
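The question doesn't name the parameters, but the delay it describes is usually the sum of the node-monitor grace period (default 40 s for the controller manager to mark the node NotReady) and the default 300 s toleration of the not-ready/unreachable taints that the admission controller adds to every pod. A minimal sketch of shortening the second part per pod, with a placeholder pod name and image:

```yaml
# Hypothetical pod that is evicted ~30 s after its node becomes
# unreachable, instead of the default 300 s.
apiVersion: v1
kind: Pod
metadata:
  name: fast-failover-demo
spec:
  containers:
    - name: app
      image: nginx
  tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
```

Setting `tolerationSeconds` too low can cause unnecessary pod churn during brief network blips, which is the trade-off behind the generous defaults.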
-
Kubernetes Development Environments – A Comparison
Local Kubernetes clusters are clusters that are running on the individual computer of the developer. There are many tools that provide such an environment, such as Minikube, microk8s, k3s, or kind. While they are not all the same, their use as a development environment is quite comparable.
-
Local Cluster vs. Remote Cluster for Kubernetes-Based Development
Since the developer is the only one who has to access this cluster for development, local clusters can be a feasible solution for this purpose. Over time, several solutions have emerged that are particularly made for running Kubernetes in local environments. The most important ones are Kubernetes in Docker (kind), MicroK8s, minikube and k3s. For a comparison of these local Kubernetes options, you can look at this post.
-
Kubernetes: Virtual Clusters As Development Environments
With local Kubernetes environments such as minikube or k3s, developers can create their own Kubernetes clusters on their local computers. This often leads to developers struggling with the management and setup of these pared-down Kubernetes technologies that are also not completely realistic compared to “real-world”, cloud-based environments. The upside of this approach is that the developers have full control over their environment and can independently create it whenever they need it.
-
[Recap] The API Hangout #31
K3d - a lightweight wrapper to run k3s in Docker.
fsm
-
Kubernetes: Multi-cluster communication with Flomesh Service Mesh (Part 2)
In part 1 of this series, we briefly touched on the use cases for multi-cluster requirements and talked about the motivation and goals of FSM and its architecture. In this part of the series, we demonstrate how to implement cross-cluster traffic scheduling and load balancing of services, and try out three different global traffic policies: local-cluster-only scheduling, failover, and global load balancing.
-
Kubernetes: Multi-cluster communication with Flomesh Service Mesh (Demo)
In Part 1 we covered the motives, goals, and architecture of Flomesh Service Mesh, and in this blog post we are going to demonstrate how to use FSM and the lightweight SMI-compatible service mesh osm-edge to achieve multi-cluster service discovery & communication.
-
Kubernetes: Multi-cluster communication with Flomesh Service Mesh
Flomesh Service Mesh (FSM) is a Kubernetes North-South traffic manager that provides Ingress controllers, Gateway API support, a Load Balancer, and cross-cluster service registration and service discovery. FSM uses Pipy, a programmable network proxy, as its data plane and is suitable for cloud, edge, and IoT.
-
osm-edge: Using access control policies to access services with the service mesh
FSM Ingress controller
-
Announcing osm-edge 1.1: ARM support and more
osm-edge 1.1 comes bundled with Flomesh Service Mesh (FSM), a Kubernetes North-South traffic manager that provides Ingress controllers, Gateway API support, a Load Balancer, and cross-cluster service registration and service discovery.
What are some alternatives?
minikube - Run Kubernetes locally
osm-edge - osm-edge is a lightweight service mesh for edge computing. It's forked from openservicemesh/osm and uses Pipy as its sidecar proxy.
devspace-plugin-loft - Loft Plugin for DevSpace - adds commands like `devspace create space` or `devspace create vcluster` to DevSpace
pipy - Pipy is a programmable proxy for the cloud, edge and IoT.
cilium - eBPF-based Networking, Security, and Observability
k3s - Lightweight Kubernetes
multi-tenancy - A working place for multi-tenancy related proposals and prototypes.
enhancements - Enhancements tracking repo for Kubernetes
kubefwd - Bulk port forwarding Kubernetes services for local development.
osm - Open Service Mesh (OSM) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
k3v - Virtual Kubernetes
smi-spec - Service Mesh Interface