| | smi-spec | kubefed |
|---|---|---|
| Mentions | 12 | 7 |
| Stars | 1,047 | 2,476 |
| Growth | - | - |
| Activity | 2.7 | 6.6 |
| Latest commit | 7 months ago | about 1 year ago |
| Language | Makefile | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
smi-spec
-
A Comprehensive Guide to API Gateways, Kubernetes Gateways, and Service Meshes
The Service Mesh Interface (SMI) specification was created to solve this portability issue.
-
Service Mesh Use Cases
> I suspect if a Service Mesh is ultimately shown to have broad value, one will make its way into the K8S core
I'm not so sure. I suspect it'll follow the same roadmap as Gateway API, which it already kind of is with the Service Mesh Interface (https://smi-spec.io/)
-
Service Mesh Considerations
It is very common for a service mesh to deploy a control plane and a data plane. The control plane does what you might expect; it controls the service mesh and gives you the ability to interact with it. Many service meshes implement the Service Mesh Interface (SMI), an API specification that standardizes how cluster operators interact with and implement service mesh features.
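To give a sense of what that standardized API looks like, here is a minimal sketch of an SMI TrafficSplit resource (one of the SMI APIs) built as a plain Python dict. The service and backend names are hypothetical examples, not from any real deployment.

```python
# Minimal sketch of an SMI TrafficSplit resource, built as a plain
# Python dict. Service and backend names are hypothetical.
traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha2",
    "kind": "TrafficSplit",
    "metadata": {"name": "checkout-rollout", "namespace": "shop"},
    "spec": {
        # The root service that clients address.
        "service": "checkout",
        # Weighted backends; any SMI-compatible mesh shifts traffic
        # between them according to these weights.
        "backends": [
            {"service": "checkout-v1", "weight": 90},
            {"service": "checkout-v2", "weight": 10},
        ],
    },
}

# The weights together describe the desired traffic distribution.
total_weight = sum(b["weight"] for b in traffic_split["spec"]["backends"])
print(total_weight)  # 100
```

Because the resource shape is defined by the spec rather than by any one mesh, the same manifest works across SMI-compatible implementations.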
-
Kubernetes: Cross-cluster traffic scheduling - Access control
Before we start, let's review the SMI Access Control Specification. There are two forms of traffic policy in osm-edge: Permissive Mode and Traffic Policy Mode. The former allows all services in the mesh to access each other, while the latter requires an explicit traffic policy before a service can be reached.
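To make Traffic Policy Mode concrete, here is a minimal sketch of an SMI TrafficTarget, again as a plain Python dict. The service accounts and route group named here are hypothetical.

```python
# Sketch of an SMI access-control TrafficTarget: it allows the
# "bookbuyer" service account to call the "bookstore" service account
# over the routes in a named HTTPRouteGroup. All names are hypothetical.
traffic_target = {
    "apiVersion": "access.smi-spec.io/v1alpha3",
    "kind": "TrafficTarget",
    "metadata": {"name": "bookstore-access", "namespace": "bookstore"},
    "spec": {
        # Who may be called.
        "destination": {
            "kind": "ServiceAccount",
            "name": "bookstore",
            "namespace": "bookstore",
        },
        # Over which routes.
        "rules": [
            {
                "kind": "HTTPRouteGroup",
                "name": "bookstore-routes",
                "matches": ["buy-a-book"],
            }
        ],
        # Who may call.
        "sources": [
            {
                "kind": "ServiceAccount",
                "name": "bookbuyer",
                "namespace": "bookbuyer",
            }
        ],
    },
}

# In Traffic Policy Mode, only traffic matching such a target is allowed.
allowed_sources = [s["name"] for s in traffic_target["spec"]["sources"]]
print(allowed_sources)  # ['bookbuyer']
```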
-
Announcing osm-edge 1.1: ARM support and more
osm-edge is a simple, complete, and standalone service mesh and ships out-of-the-box with all the necessary components to deploy a complete service mesh. As a lightweight and SMI-compatible Service Mesh, osm-edge is designed to be intuitive and scalable.
-
KubeCon 2022 - Day 1
-
Kubernetes State Of The Union — KubeCon 2019, San Diego
I started on Monday, attending ServiceMeshCon 2019. My guesstimate is that about 1,000 people attended it. I believe service mesh is playing such a crucial role in scaling cloud native technologies that large-scale cloud-native deployments may not be possible without it. Just as you cannot really succeed in deploying a microservices-based application without a microservices orchestration engine like Kubernetes, you cannot scale the size and capacity of a microservices-based application without a service mesh. That's what makes it so compelling to see all the service mesh creators — Istio, Linkerd, Consul, Kuma — and listen to them. There was also a lot of discussion of SMI (Service Mesh Interface), a common interface across all service meshes. The panel at the end of the day included all the major service mesh players, and some very thought-provoking questions were asked and answered.
-
GraphQL - Usecase and Architecture
Do you need a Service Mesh?
-
Introducing the Cloud Native Compute Foundation (CNCF)
In the episode with Annie, she gave a great overview of the CNCF and a handful of projects that she's excited about. Those include Helm, Linkerd, Kudo, Keda and Artifact Hub. I gave a bonus example of the Service Mesh Interface project.
-
Service Mesh Interface
SMI official website: https://smi-spec.io
kubefed
-
Scaling Kubernetes to multiple clusters and regions
The project is similar (in spirit) to kubefed.
-
Build a Federation of Multiple Kubernetes Clusters With Kubefed V2
What Is KubeFed? KubeFed (Kubernetes Cluster Federation) allows you to use a single Kubernetes cluster to coordinate multiple Kubernetes clusters. It can deploy multi-cluster applications across different regions and is designed with disaster recovery in mind. To learn more about KubeFed: https://github.com/kubernetes-sigs/kubefed
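As a rough illustration of the KubeFed v2 API, here is a sketch of a FederatedDeployment built as a plain Python dict: a Deployment template, a placement section naming the member clusters that receive it, and a per-cluster override. The cluster names and override values are hypothetical.

```python
# Sketch of a KubeFed FederatedDeployment: a Deployment template plus
# placement (which member clusters get it) and per-cluster overrides.
# Cluster names and override values are hypothetical.
federated_deployment = {
    "apiVersion": "types.kubefed.io/v1beta1",
    "kind": "FederatedDeployment",
    "metadata": {"name": "web", "namespace": "prod"},
    "spec": {
        # The plain Deployment that KubeFed propagates to member clusters.
        "template": {
            "spec": {
                "replicas": 3,
                "selector": {"matchLabels": {"app": "web"}},
                "template": {
                    "metadata": {"labels": {"app": "web"}},
                    "spec": {
                        "containers": [{"name": "web", "image": "nginx:1.25"}]
                    },
                },
            }
        },
        # Which registered member clusters receive the Deployment.
        "placement": {"clusters": [{"name": "eu-west"}, {"name": "us-east"}]},
        # Per-cluster tweaks, e.g. more replicas in the busier region.
        "overrides": [
            {
                "clusterName": "us-east",
                "clusterOverrides": [{"path": "/spec/replicas", "value": 5}],
            }
        ],
    },
}

target_clusters = [
    c["name"] for c in federated_deployment["spec"]["placement"]["clusters"]
]
print(target_clusters)  # ['eu-west', 'us-east']
```

The split between template, placement, and overrides is what lets one control-plane cluster describe a deployment once and fan it out with regional variations.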
-
Evolution of code deployment tools at Mixpanel
There's active work on a standard called kubefed [0].
> I want a scale-to-zero node-pool in every region, and one kube master api for the world.
Personally, I'd generalize this to: "I want to describe the reliability requirements and configuration for my software and have an automated system solve for where, how many, when, and how to route to it"
I want to have something where I can say "I need to have high availability, lowest latency, and X GB of RAM and Y cores" and have a system automatically schedule me wherever compute is cheapest while also intelligently routing traffic to my servers based on client origins.
[0] - https://github.com/kubernetes-sigs/kubefed
-
Building a Kubernetes-based Solution in a Hybrid Environment by Using KubeMQ
Two of the more common approaches to deploying Kubernetes in hybrid environments are from cloud-to-cloud and cloud to on-prem. Whether this is from using a single control plane like Rancher, Platform9, or Gardener to create multiple clusters that are managed from a single location, or utilizing Kubernetes federation to create a cluster that spans different regions, this model has become a key feature offered by Kubernetes that has helped drive adoption.
-
Infrastructure Engineering — Deployment Strategies
This is made possible by Kubernetes being a standard, portable platform across cloud providers, by the ability to manage infrastructure as code, by the ability to set up networking between clusters whenever needed with the help of multi-cluster service meshes, and by the ability to orchestrate deployments using Kubefed and Crossplane.
-
Architecting your Cloud Native Infrastructure
And the interesting thing about networking in the cloud is that it need not be limited to the cloud provider within your region but can span multiple providers across multiple regions as needed, and this is where projects like Kubefed and Crossplane definitely help.
-
Infrastructure Engineering - Diving Deep
Projects like Kubefed and Crossplane are especially useful here, since they help you manage and orchestrate clusters and the requests you send across different cloud providers, even when those span multiple regions.
What are some alternatives?
cni - Container Network Interface - networking for Linux containers
crossplane - The Cloud Native Control Plane
cloudwithchris.com - Cloud With Chris is my personal blogging, podcasting and vlogging platform where I talk about all things cloud. I also invite guests to talk about their experiences with the cloud and hear about lessons learned along their journey.
karmada - Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
emissary - open source Kubernetes-native API gateway for microservices built on the Envoy Proxy
virtual-kubelet - Virtual Kubelet is an open source Kubernetes kubelet implementation.
pipy - Pipy is a programmable proxy for the cloud, edge and IoT.
velero - Backup and migrate Kubernetes applications and their persistent volumes
osm-edge - osm-edge is a lightweight service mesh for the edge-computing. It's forked from openservicemesh/osm and use pipy as sidecar proxy.
rook - Storage Orchestration for Kubernetes
envoy - Cloud-native high-performance edge/middle/service proxy
OpenFaaS - OpenFaaS - Serverless Functions Made Simple