| | smi-spec | virtual-kubelet |
|---|---|---|
| Mentions | 12 | 10 |
| Stars | 1,047 | 4,085 |
| Growth | - | 0.6% |
| Activity | 2.7 | 6.7 |
| Last Commit | 7 months ago | about 20 hours ago |
| Language | Makefile | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
smi-spec
-
A Comprehensive Guide to API Gateways, Kubernetes Gateways, and Service Meshes
The Service Mesh Interface (SMI) specification was created to solve this portability issue.
-
Service Mesh Use Cases
> I suspect if a Service Mesh is ultimately shown to have broad value, one will make its way into the K8S core
I'm not so sure. I suspect it'll follow the same roadmap as Gateway API, which it already kind of is with the Service Mesh Interface (https://smi-spec.io/)
-
Service Mesh Considerations
It is very common that a service mesh deploys a control plane and a data plane. The control plane does what you might expect; it controls the service mesh and gives you the ability to interact with it. Many service meshes implement the Service Mesh Interface (SMI) which is an API specification to standardize the way cluster operators interact with and implement features.
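To make that standardization concrete: SMI defines a small set of CRDs (traffic splitting, access control, metrics) that any conforming mesh can implement. Below is a hypothetical TrafficSplit resource sketching what the shared API looks like; the `apiVersion` follows the SMI spec at `v1alpha2`, but the version your mesh supports may differ, and the service names are made up for illustration.

```yaml
# Hypothetical SMI TrafficSplit: shift 10% of traffic to a canary.
# Any SMI-conformant mesh (Linkerd, OSM, ...) should honor the same resource.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: website-rollout
  namespace: default
spec:
  service: website          # the root service clients address
  backends:
  - service: website-v1     # current version keeps most of the traffic
    weight: 90
  - service: website-v2     # canary receives the remainder
    weight: 10
```

Because the resource is mesh-agnostic, the same manifest can in principle be applied unchanged when switching between SMI-compatible meshes.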
-
Kubernetes: Cross-cluster traffic scheduling - Access control
Before we start, let's review the SMI Access Control Specification. osm-edge supports two forms of traffic policy: Permissive Mode and Traffic Policy Mode. The former allows services in the mesh to access each other freely, while the latter requires an explicit traffic policy before a service can be reached.
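In Traffic Policy Mode, access is granted by pairing a TrafficTarget with a route definition. The sketch below follows the SMI access/specs APIs at `v1alpha3`/`v1alpha4`; the service accounts, namespaces, and route names are hypothetical, and your mesh may support different API versions.

```yaml
# Hypothetical access-control policy: allow the bookbuyer service account
# to call GET /books on the bookstore service.
apiVersion: specs.smi-spec.io/v1alpha4
kind: HTTPRouteGroup
metadata:
  name: books-routes
  namespace: bookstore
spec:
  matches:
  - name: list-books
    pathRegex: /books
    methods: ["GET"]
---
apiVersion: access.smi-spec.io/v1alpha3
kind: TrafficTarget
metadata:
  name: bookbuyer-to-bookstore
  namespace: bookstore
spec:
  destination:              # who may be called
    kind: ServiceAccount
    name: bookstore
    namespace: bookstore
  sources:                  # who may call
  - kind: ServiceAccount
    name: bookbuyer
    namespace: bookbuyer
  rules:                    # on which routes
  - kind: HTTPRouteGroup
    name: books-routes
    matches:
    - list-books
```

Traffic not matched by any TrafficTarget is denied in this mode, which is what distinguishes it from Permissive Mode.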
-
Announcing osm-edge 1.1: ARM support and more
osm-edge is a simple, standalone service mesh that ships out of the box with all the components needed to deploy a complete mesh. As a lightweight and SMI-compatible service mesh, osm-edge is designed to be intuitive and scalable.
- KubeCon 2022 - Day 1
-
Kubernetes State Of The Union — KubeCon 2019, San Diego
I started on Monday, attending ServiceMeshCon 2019. My guesstimate is that about 1,000 people attended it. I believe service mesh is playing such a crucial role in scaling cloud-native technologies that large-scale cloud-native deployments may not be possible without one. Just as you cannot really succeed in deploying a microservices-based application without a microservices orchestration engine like Kubernetes, you cannot scale the size and capacity of a microservices-based application without a service mesh. That's what makes it so compelling to see all the service mesh creators (Istio, Linkerd, Consul, Kuma) and listen to them. There was also a lot of discussion of SMI (Service Mesh Interface), a common interface across all service meshes. The panel at the end of the day included all the major service mesh players, and some very thought-provoking questions were asked and answered by the panel.
-
GraphQL - Usecase and Architecture
Do you need a Service Mesh?
-
Introducing the Cloud Native Compute Foundation (CNCF)
In the episode with Annie, she gave a great overview of the CNCF and a handful of projects that she's excited about. Those include Helm, Linkerd, Kudo, Keda and Artifact Hub. I gave a bonus example of the Service Mesh Interface project.
-
Service Mesh Interface
SMI official website: https://smi-spec.io
virtual-kubelet
-
Bare-Metal Kubernetes, Part I: Talos on Hetzner
Speaking of k8s, anyone here know of ready-made solutions for getting Xcode (i.e. xcodebuild) running in pods? As far as I'm aware, there are no good solutions for getting Xcode running on Linux, so at the moment I'm just futzing about with a virtual-kubelet[0] implementation that spawns macOS VMs. This works just fine, but the problem seems like such an obvious one that I expect there to be some existing solution(s) I just missed.
[0]:https://github.com/virtual-kubelet/virtual-kubelet/
-
Keeping Airflow tasks “cloud-native”
Have you looked into virtual kubelet yet? It allows you to make a virtual node in your on-prem cluster that schedules workloads on services like AWS Fargate or Azure Container Instances.
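In practice, a virtual-kubelet node registers itself with a taint so that ordinary workloads don't land on it by accident, and a pod opts in with a matching toleration (and usually a node selector). A minimal sketch, noting that the exact labels and taint keys vary by provider (the `virtual-kubelet.io/provider` taint is the project's convention; AKS virtual nodes, for instance, add their own ACI-specific taint):

```yaml
# Hypothetical pod that opts into running on a virtual-kubelet node.
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
  nodeSelector:
    type: virtual-kubelet          # label conventionally set on the virtual node
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists               # tolerate the virtual node's taint
```

Pods without the toleration keep scheduling onto regular nodes, which is what makes the burst-to-cloud pattern safe to mix into an existing cluster.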
-
Similar to AWS Fargate provider?
If you are serious about implementing this yourself, you may want to look into virtual kubelet: https://virtual-kubelet.io/
- Nomad vs. Kubernetes
-
Deploying on-prem Kubernetes: what is the best approach, paid or unpaid, to deploy a cluster on premises with burst to Azure/AWS? The only requirement is the ability to have some static pods. I have a preference for free/open-source solutions.
I just stumbled upon this project a while back and don't have experience with it, so I don't know how well it works and what caveats you may face, but there's Virtual Kubelet, which aims to do just that, i.e. running a virtual Kubernetes node outside the cluster. Its Kip provider sounds like the thing you're looking for.
-
Create a fake node
Are you looking for something like the following: https://github.com/virtual-kubelet/virtual-kubelet
-
How to use the GitOps model to create, update and manage applications at the edge with KubeEdge and Argo
Kubeedge docs are light on self-justification... How does https://github.com/kubeedge/kubeedge differ from https://github.com/virtual-kubelet/virtual-kubelet or just running a regular kubelet on that edge machine?
-
Autoscaling Redis applications on Kubernetes 🚀🚀
If this sounds interesting, do check out Virtual Nodes in Azure Kubernetes Service to see how you can use them to seamlessly scale your applications to Azure Container Instances, benefit from quick provisioning of pods, and pay only per second for their execution time. The virtual nodes add-on for AKS is based on the open-source Virtual Kubelet project, a Kubernetes kubelet implementation.
-
Infrastructure Engineering - Diving Deep
Use cases like these are made possible by projects like KubeEdge, K3s, and Virtual Kubelet. You can read more about how they power the edge with different architectures and compromises here.
-
Evolving Container Security with Linux User Namespaces
This is a complicated question to answer.
This isn't my expertise (the cluster orchestration system), but I can answer to the best of my abilities: Titus, today is a system that sits on top of Kubernetes, and uses Kubernetes components to do its thing, but we've substituted many of the systems with our own. For example, closer to my area of knowledge, we've used our own executor / provider along with the Virtual Kubelet project (https://github.com/virtual-kubelet/virtual-kubelet) instead of Kubelet.
We're exploring where we can leverage the Kubernetes ecosystem, adapt components, or help contribute changes back that others can leverage to enable our use of more COTS components of Kubernetes.
tl;dr: We're swapping out the engines while in flight
What are some alternatives?
cni - Container Network Interface - networking for Linux containers
kubeedge - Kubernetes Native Edge Computing Framework (project under CNCF)
cloudwithchris.com - Cloud With Chris is my personal blogging, podcasting and vlogging platform where I talk about all things cloud. I also invite guests to talk about their experiences with the cloud and hear about lessons learned along their journey.
kubevirt - Kubernetes Virtualization API and runtime in order to define and manage virtual machines.
emissary - open source Kubernetes-native API gateway for microservices built on the Envoy Proxy
kubefed - Kubernetes Cluster Federation
pipy - Pipy is a programmable proxy for the cloud, edge and IoT.
keda - KEDA is a Kubernetes-based event-driven autoscaling component. It provides event-driven scale for any container running in Kubernetes
osm-edge - osm-edge is a lightweight service mesh for edge computing. It's forked from openservicemesh/osm and uses Pipy as the sidecar proxy.
charts - ⚠️(OBSOLETE) Curated applications for Kubernetes