| | smi-spec | runc |
|---|---|---|
| Mentions | 12 | 32 |
| Stars | 1,047 | 11,428 |
| Growth | - | 0.6% |
| Activity | 2.7 | 9.3 |
| Latest commit | 7 months ago | 6 days ago |
| Language | Makefile | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
smi-spec
- A Comprehensive Guide to API Gateways, Kubernetes Gateways, and Service Meshes
The Service Mesh Interface (SMI) specification was created to solve this portability issue.
- Service Mesh Use Cases
> I suspect if a Service Mesh is ultimately shown to have broad value, one will make its way into the K8S core
I'm not so sure. I suspect it'll follow the same roadmap as Gateway API, which it already kind of is with the Service Mesh Interface (https://smi-spec.io/)
- Service Mesh Considerations
It is very common that a service mesh deploys a control plane and a data plane. The control plane does what you might expect; it controls the service mesh and gives you the ability to interact with it. Many service meshes implement the Service Mesh Interface (SMI) which is an API specification to standardize the way cluster operators interact with and implement features.
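As a sketch of what that standardization looks like in practice, SMI defines Kubernetes custom resources that any conforming mesh can act on. The fragment below is an SMI `TrafficTarget` from the access control part of the spec; the service accounts, namespace, and route group name are illustrative, not from the quoted article:

```yaml
# Illustrative SMI access control policy: allow the "frontend" workload
# to call the "backend" workload over the routes in "backend-routes".
apiVersion: access.smi-spec.io/v1alpha3
kind: TrafficTarget
metadata:
  name: backend-access
  namespace: demo
spec:
  destination:
    kind: ServiceAccount
    name: backend
    namespace: demo
  sources:
    - kind: ServiceAccount
      name: frontend
      namespace: demo
  rules:
    - kind: HTTPRouteGroup
      name: backend-routes
```

Because the resource is part of the SMI API rather than any one mesh's API, the same manifest should apply whether the cluster runs Linkerd, Open Service Mesh, or another SMI-compatible implementation.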
- Kubernetes: Cross-cluster traffic scheduling - Access control
Before we start, let's review the SMI Access Control specification. osm-edge supports two forms of traffic policy: Permissive Mode and Traffic Policy Mode. The former allows services in the mesh to access each other freely, while the latter requires an appropriate traffic policy to be in place before a service can be reached.
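For context, Open Service Mesh (from which osm-edge is forked) exposes the permissive/traffic-policy switch on its `MeshConfig` resource; assuming osm-edge inherits the same API, disabling permissive mode so that explicit SMI policies are required might look like this (resource and namespace names follow OSM's defaults):

```yaml
# Assumed osm-edge/OSM mesh configuration: turn off permissive mode so
# traffic is denied unless an SMI TrafficTarget explicitly allows it.
apiVersion: config.openservicemesh.io/v1alpha2
kind: MeshConfig
metadata:
  name: osm-mesh-config
  namespace: osm-system
spec:
  traffic:
    enablePermissiveTrafficPolicyMode: false
```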
- Announcing osm-edge 1.1: ARM support and more
osm-edge is a simple, complete, and standalone service mesh and ships out-of-the-box with all the necessary components to deploy a complete service mesh. As a lightweight and SMI-compatible Service Mesh, osm-edge is designed to be intuitive and scalable.
- KubeCon 2022 - Day 1
- Kubernetes State Of The Union — KubeCon 2019, San Diego
I started on Monday, attending ServiceMeshCon 2019. My guesstimate is that about 1,000 people attended. I believe service mesh is playing such a crucial role in scaling cloud-native technologies that large-scale cloud-native deployments may not be possible without one. Just as you cannot really succeed in deploying a microservices-based application without a microservices orchestration engine like Kubernetes, you cannot scale the size and capacity of a microservices-based application without a service mesh.

That's what makes it so compelling to see all the service mesh creators — Istio, Linkerd, Consul, Kuma — and listen to them. There was also a lot of discussion of SMI (Service Mesh Interface) — a common interface across all service meshes. The panel at the end of the day included all the major service mesh players, and some very thought-provoking questions were asked and answered.
- GraphQL - Usecase and Architecture
Do you need a Service Mesh?
- Introducing the Cloud Native Compute Foundation (CNCF)
In the episode with Annie, she gave a great overview of the CNCF and a handful of projects that she's excited about. Those include Helm, Linkerd, Kudo, Keda and Artifact Hub. I gave a bonus example of the Service Mesh Interface project.
- Service Mesh Interface
SMI official website: https://smi-spec.io
runc
- Nanos – A Unikernel
I can speak to this. Containers, and by extension k8s, break a well-known security boundary that has existed for a very long time: whether you are using a real (hardware) server or a virtual machine in the cloud, if you pop that instance/server, generally speaking you only have access to that server. Yeah, you might find a DB config with connection details if you landed on, say, a web app host, but in general you still have to work to start popping the next N servers.
That's not the case when you are running in k8s, and the last container breakout was announced only about a month ago: https://github.com/opencontainers/runc/security/advisories/G... .
At the end of the day, it is simply not a security boundary. It can solve other problems, but not security ones.
- Several container breakouts due to internally leaked fds
- Container breakout through process.cwd trickery and leaked fds
- US Cybersecurity: The Urgent Need for Memory Safety in Software Products
It's interesting that, in light of things like this, you still see large software companies adding support for new components written in non-memory safe languages (e.g. C)
As an example, Red Hat OpenShift added support for crun (https://github.com/containers/crun) this year (https://cloud.redhat.com/blog/whats-new-in-red-hat-openshift...). crun is written in C as an alternative to runc, which is written in Go (https://github.com/opencontainers/runc)...
- Run Firefox on ChromeOS
Rabbit hole indeed. That wasn't related to my job at the time, lol. The job change came with a company-provided computer and that put an end to the tinkering.
BTW, I found my hacks to make runc run on Chromebook: https://github.com/opencontainers/runc/compare/main...gabrys...
- Crun: Fast and lightweight OCI runtime and C library for running containers
Being the main author of crun, I can clarify that statement: I am not a fan of Go _for this particular use case_.
Using C instead of Go avoided a bunch of the workarounds that exist in runc to work around the Go runtime, e.g. https://github.com/opencontainers/runc/blob/main/libcontaine...
- Best virtualization solution with Ubuntu 22.04
runc
- Bringing Memory Safety to sudo and su - with Ferrous Systems and Tweedegolf
Not OP, but if I had to guess, a lot of this can be picked up by just observing common security issues in the Linux space, since similar mistakes and oversights have caused quite a few real-world CVEs in the past, e.g. this random example of a TOCTTOU vulnerability in runc.
- Containers - between history and runtimes
- [email protected]+incompatible with ubuntu 22.04 on arm64 ?
What are some alternatives?
cni - Container Network Interface - networking for Linux containers
crun - A fast and lightweight fully featured OCI runtime and C library for running containers
cloudwithchris.com - Cloud With Chris is my personal blogging, podcasting and vlogging platform where I talk about all things cloud. I also invite guests to talk about their experiences with the cloud and hear about lessons learned along their journey.
Moby - The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems
emissary - open source Kubernetes-native API gateway for microservices built on the Envoy Proxy
youki - A container runtime written in Rust
pipy - Pipy is a programmable proxy for the cloud, edge and IoT.
podman - Podman: A tool for managing OCI containers and pods.
osm-edge - osm-edge is a lightweight service mesh for edge computing. It is forked from openservicemesh/osm and uses pipy as its sidecar proxy.
containerd - An open and reliable container runtime
kubefed - Kubernetes Cluster Federation
conmon - An OCI container runtime monitor.