buildkit VS runc

Compare buildkit vs runc and see what their differences are.

buildkit

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit (by moby)

runc

CLI tool for spawning and running containers according to the OCI specification (by opencontainers)
                buildkit             runc
Mentions        52                   32
Stars           7,606                11,339
Stars growth    2.3%                 2.3%
Activity        9.8                  9.3
Latest commit   about 18 hours ago   9 days ago
Language        Go                   Go
License         Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

buildkit

Posts with mentions or reviews of buildkit. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-03.
  • The worst thing about Jenkins is that it works
    12 projects | news.ycombinator.com | 3 Dec 2023
    > We are using docker-in-docker at the moment

    You can also run a "less privileged" container with all the features of Docker by using rootless buildkit in Kubernetes. Here are some examples:

    https://github.com/moby/buildkit/tree/master/examples/kubern...

    https://github.com/moby/buildkit/blob/master/examples/kubern...

    It's also possible to run dedicated buildkitd workers and connect to them remotely.
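
    For example, a client can point buildctl at such a remote buildkitd over TCP and push the result straight to a registry. A rough sketch (the address, service name, and image reference below are placeholders, not from the linked examples):

      # Build with a remote buildkitd instead of a local Docker daemon
      # (tcp address and image name are illustrative placeholders)
      buildctl --addr tcp://buildkitd.buildkit.svc:1234 build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=registry.example.com/app:latest,push=true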

  • macOS Containers v0.0.1
    24 projects | news.ycombinator.com | 26 Sep 2023
  • Jenkins Agents On Kubernetes
    7 projects | dev.to | 4 Sep 2023
    Now, since Kubernetes works off of containerd, I'll take a different approach to handling container builds by using nerdctl and the BuildKit that comes bundled with it. I'll do this on the amd64 control-plane node, since it's beefier than my Raspberry Pi workers, for handling builds and build-related services. Go ahead and download and unpack the latest nerdctl release as of this writing (make sure to check the releases page in case there's a newer one):
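
    A hedged sketch of that download-and-build step, assuming the "full" release bundle that ships buildkitd alongside nerdctl (the version number is a placeholder; check the releases page):

      # Download and unpack the nerdctl "full" bundle (version is illustrative)
      curl -sfSLO https://github.com/containerd/nerdctl/releases/download/v1.7.4/nerdctl-full-1.7.4-linux-amd64.tar.gz
      sudo tar -C /usr/local -xzf nerdctl-full-1.7.4-linux-amd64.tar.gz
      # Assumption: the full bundle includes a buildkit.service systemd unit
      sudo systemctl enable --now buildkit
      # Build against containerd via the bundled BuildKit
      sudo nerdctl build -t example.com/app:latest .
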
  • Cicada - CI/CD platform written with Rust
    2 projects | /r/rust | 25 Apr 2023
    Yeah, only Linux containers at the moment, BuildKit is the way we are constructing pipelines and doing caching. Split on if we will support non-linux hosts, but definitely want to find a good solution to not doing Docker-in-Docker.
  • Better support of Docker layer caching in Cargo
    2 projects | /r/rust | 30 Mar 2023
    Relevant issues are https://github.com/moby/buildkit/issues/3011 and https://github.com/moby/buildkit/issues/1512.
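
    Until something lands in Cargo itself, a common workaround (not what the linked issues propose) is BuildKit's cache mounts, so the registry and target directories survive across builds. A rough sketch with illustrative paths and base image:

      # syntax=docker/dockerfile:1
      FROM rust:1.76 AS build
      WORKDIR /src
      COPY . .
      # Persist the cargo registry and build artifacts across builds
      RUN --mount=type=cache,target=/usr/local/cargo/registry \
          --mount=type=cache,target=/src/target \
          cargo build --release
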
  • DockerHub replacement strategy and options
    5 projects | /r/ipfs | 16 Mar 2023
    If you look at this list, the same thing I noticed is that most of these are workarounds to support the web2 API on top of IPFS. There is a pull request in draft for BuildKit that may improve native IPFS image support on the image-build side, with the work on the nerdctl side being the most direct support for pushing and pulling images with IPFS hashes.
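
    For reference, nerdctl's IPFS integration looks roughly like this (image name and CID are placeholders; see nerdctl's IPFS docs for the exact workflow):

      # Push an image to IPFS; nerdctl prints a CID on success
      nerdctl push ipfs://registry.example.com/app:latest
      # Pull it back elsewhere by that CID
      nerdctl pull ipfs://bafk...   # CID elided
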
  • Why I joined Dagger
    3 projects | dev.to | 6 Feb 2023
    Last year I joined Dagger after realizing we were trying to solve all of the same problems (escaping YAML hell, unifying CI and dev workflows, minimizing CI overhead – more on all that later). We were even using the same underlying technology (Buildkit) and running into all of the same challenges.
  • Rails on Docker · Fly
    16 projects | news.ycombinator.com | 26 Jan 2023
    How would you do this in a generic, reusable way company-wide? Given that you don't know the targets beforehand, the names, or even the number of stages.

    It is of course possible to do for a single project with a bit of effort: build each stage with a remote OCI cache source, then push the cache there afterwards. But... that sucks.

    What you want is the `max` cache type in buildkit[1]. Except... not much supports that yet. The native S3 cache would also be good once it stabilizes.

    1. https://github.com/moby/buildkit#export-cache
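
    With buildx as the client, that looks roughly like this (registry and image names are placeholders):

      # Export all layers of all stages to a registry-backed cache (mode=max)
      # and reuse it on the next build
      docker buildx build \
        --cache-to type=registry,ref=registry.example.com/app:buildcache,mode=max \
        --cache-from type=registry,ref=registry.example.com/app:buildcache \
        -t registry.example.com/app:latest --push .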

    16 projects | news.ycombinator.com | 26 Jan 2023
    I know those questions are probably rhetorical, but to answer them anyway:

    > > Nice syntax

    > Is it though?

    The most common alternative is to use a backslash at the end of each line, to create a line continuation. This swallows the newline, so you also need a semicolon. Forgetting the semicolon leads to weird errors. Also, while Docker supports comments interspersed with line continuations, sh doesn't, so if such a command contains comments it can't be copied into sh.

    The heredoc syntax doesn't have any of these issues; I think it is infinitely better.

    (There is also JSON-style syntax, but it requires all backslashes to be doubled and is less popular.)

    *In practice "&&" is normally used rather than ";" in order to stop the build if any command fails (otherwise sh only propagates the exit status of the last command). This is actually a small footgun with the heredoc syntax, because it is tempting to just use a newline (equivalent to a semicolon). The programmer must remember to type "&&" after each command, or use `set -e` at the start of the RUN command, or use `SHELL ["/bin/sh", "-e", "-c"]` at the top of the Dockerfile. Sigh...
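
    Concretely, a minimal sketch of the safer heredoc pattern (package names are arbitrary):

      # syntax=docker/dockerfile:1
      FROM debian:bookworm-slim
      RUN <<EOF
      set -e   # stop at the first failing command; bare newlines behave like ";"
      apt-get update
      apt-get install -y --no-install-recommends ca-certificates curl
      rm -rf /var/lib/apt/lists/*
      EOF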

    > Are the line breaks semantic, or is it all a multiline string?

    The line breaks are preserved ("what you see is what you get").

    > Is EOF a special end-of-file token

    You can choose which token to use (EOF is a common convention, but any token can be used). The text right after the "<<" indicates which token you've chosen, and the heredoc is terminated by the first line that contains just that token.

    This allows you to easily create a heredoc containing other heredocs. Can you think of any other quoting syntax that allows that? (Lisp's quote form comes to mind.)

    > Where is it documented?

    The introduction blog post has already been linked. The reference documentation (https://github.com/moby/buildkit/blob/master/frontend/docker...) mentions the heredoc syntax but doesn't give a formal specification (unfortunately this is a wider problem for Dockerfiles, see https://supercontainers.github.io/containers-wg/ideas/docker...); instead it links to the sh syntax (https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...), on which the Dockerfile heredoc syntax is based.

    (Good luck looking up this syntax if you don't know what it's called. But that's the same for most punctuation-based syntax.)

    16 projects | news.ycombinator.com | 26 Jan 2023
    Unfortunately this syntax is not generally supported yet - it's only supported with the BuildKit backend, and it initially landed in the 1.3 "labs" release. It was moved to stable in early 2022 (see https://github.com/moby/buildkit/issues/2574), which helps, but I think it may still require a syntax directive to enable.

    Many other Dockerfile build tools still don't support it, e.g. buildah (see https://github.com/containers/buildah/issues/3474).

    Useful now if you have control over the environment your images are being built in, but I'm excited for the future when it's commonplace!
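
    If you do need to opt in explicitly, the directive is just the first line of the Dockerfile, pinning a frontend version that has heredocs (the exact minimum version here is from memory; check the dockerfile frontend release notes):

      # syntax=docker/dockerfile:1.4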

runc

Posts with mentions or reviews of runc. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-13.
  • Nanos – A Unikernel
    11 projects | news.ycombinator.com | 13 Mar 2024
    I can speak to this. Containers, and by extension k8s, break a well-known security boundary that has existed for a very long time: whether you are using a real (hardware) server or a virtual machine in the cloud, if you pop that instance/server you generally only have access to that server. Yeah, you might find a DB config with connection details if you landed on, say, a web app host, but in general you still have to work to start popping the next N servers.

    That's not the case when you are running in k8s and the last container breakout was just announced ~1 month ago: https://github.com/opencontainers/runc/security/advisories/G... .

    At the end of the day it is simply not a security boundary. It can solve other problems but not security ones.

  • US Cybersecurity: The Urgent Need for Memory Safety in Software Products
    3 projects | news.ycombinator.com | 21 Sep 2023
    It's interesting that, in light of things like this, you still see large software companies adding support for new components written in non-memory-safe languages (e.g. C).

    As an example, Red Hat OpenShift added support for crun (https://github.com/containers/crun) this year (https://cloud.redhat.com/blog/whats-new-in-red-hat-openshift...), which is written in C, as an alternative to runc, which is written in Go (https://github.com/opencontainers/runc)...

  • Run Firefox on ChromeOS
    3 projects | news.ycombinator.com | 8 Aug 2023
    Rabbit hole indeed. That wasn't related to my job at the time, lol. The job change came with a company-provided computer and that put an end to the tinkering.

    BTW, I found my hacks to make runc run on Chromebook: https://github.com/opencontainers/runc/compare/main...gabrys...

  • Crun: Fast and lightweight OCI runtime and C library for running containers
    7 projects | news.ycombinator.com | 4 Jun 2023
    Being the main author of crun, I can clarify that statement: I am not a fan of Go _for this particular use case_.

    Using C instead of Go avoided a bunch of the workarounds that exist in runc to work around the Go runtime, e.g. https://github.com/opencontainers/runc/blob/main/libcontaine...

  • Best virtualization solution with Ubuntu 22.04
    8 projects | /r/linuxquestions | 28 May 2023
    runc
  • Containers - between history and runtimes
    3 projects | dev.to | 26 Apr 2023
  • [email protected]+incompatible with ubuntu 22.04 on arm64 ?
    2 projects | /r/docker | 25 Apr 2023
  • Why did the Krustlet project die?
    6 projects | /r/kubernetes | 14 Jan 2023
    Yeah, runtimeClass lets you specify which CRI plugin you want based on what you have available. Here's an example from the containerd documentation - you could have one node that can run containers under standard runc, gvisor, kata containers, or WASM. Without runtimeClass, you'd need either some form of custom solution or four differently configured nodes to run those different runtimes. That's how krustlet did it - you'd have kubelet/containerd nodes and krustlet/wasm nodes, and could only run the appropriate workload on each node type.
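
    As a rough sketch of those mechanics (the handler name must match a runtime configured in containerd; "runsc"/gVisor here is a typical example, not taken from the linked docs):

      # Register a RuntimeClass that maps to a containerd runtime handler
      # (then select it per pod with spec.runtimeClassName: gvisor)
      cat <<'EOF' | kubectl apply -f -
      apiVersion: node.k8s.io/v1
      kind: RuntimeClass
      metadata:
        name: gvisor
      handler: runsc
      EOF
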
  • Container Deep Dive 2: Container Engines
    3 projects | dev.to | 1 Dec 2022
    The CRI-O container engine provides a stable, more secure, and performant platform for running Open Container Initiative (OCI) compatible runtimes. CRI-O's purpose is to be the container engine that implements the Kubernetes Container Runtime Interface (CRI) for OpenShift Container Platform and Kubernetes, replacing the Docker service. (Source)
  • KubeFire: Create and manage Kubernetes clusters using microVMs with Firecracker …
    8 projects | dev.to | 11 Nov 2022
    root@kubefire:~# kubefire install INFO[2022-11-11T11:46:13Z] downloading https://raw.githubusercontent.com/innobead/kubefire/v0.3.8/scripts/install-prerequisites.sh to save /root/.kubefire/bin/v0.3.8/install-prerequisites.sh force=false version=v0.3.8 INFO[2022-11-11T11:46:14Z] running script (install-prerequisites.sh) version=v0.3.8 INFO[2022-11-11T11:46:14Z] running /root/.kubefire/bin/v0.3.8/install-prerequisites.sh version=v0.3.8 INFO[2022-11-11T11:46:14Z] + TMP_DIR=/tmp/kubefire INFO[2022-11-11T11:46:14Z] ++ go env GOARCH INFO[2022-11-11T11:46:14Z] ++ echo amd64 INFO[2022-11-11T11:46:14Z] + GOARCH=amd64 INFO[2022-11-11T11:46:14Z] + KUBEFIRE_VERSION=v0.3.8 INFO[2022-11-11T11:46:14Z] + CONTAINERD_VERSION=v1.6.6 + IGNITE_VERION=v0.10.0 INFO[2022-11-11T11:46:14Z] + CNI_VERSION=v1.1.1 + RUNC_VERSION=v1.1.3 INFO[2022-11-11T11:46:14Z] + '[' -z v0.3.8 ']' + '[' -z v1.6.6 ']' + '[' -z v0.10.0 ']' + '[' -z v1.1.1 ']' + '[' -z v1.1.3 ']' INFO[2022-11-11T11:46:14Z] ++ sed -E 's/(v[0-9]+\.[0-9]+\.[0-9]+)[a-zA-Z0-9\-]*/\1/g' INFO[2022-11-11T11:46:14Z] +++ echo v0.3.8 INFO[2022-11-11T11:46:14Z] + STABLE_KUBEFIRE_VERSION=v0.3.8 INFO[2022-11-11T11:46:14Z] + rm -rf /tmp/kubefire INFO[2022-11-11T11:46:14Z] + mkdir -p /tmp/kubefire INFO[2022-11-11T11:46:14Z] + pushd /tmp/kubefire /tmp/kubefire /root INFO[2022-11-11T11:46:14Z] + trap cleanup EXIT ERR INT TERM INFO[2022-11-11T11:46:14Z] + check_virtualization + _is_arm_arch INFO[2022-11-11T11:46:14Z] + uname -m INFO[2022-11-11T11:46:14Z] + grep aarch64 INFO[2022-11-11T11:46:14Z] + return 1 INFO[2022-11-11T11:46:14Z] + lscpu INFO[2022-11-11T11:46:14Z] + grep 'Virtuali[s|z]ation' INFO[2022-11-11T11:46:14Z] Virtualization: VT-x Virtualization type: full INFO[2022-11-11T11:46:14Z] + lsmod INFO[2022-11-11T11:46:14Z] + grep kvm INFO[2022-11-11T11:46:14Z] kvm_intel 372736 0 kvm 1028096 1 kvm_intel INFO[2022-11-11T11:46:14Z] + install_runc + _check_version /usr/local/bin/runc -version v1.1.3 INFO[2022-11-11T11:46:14Z] + set +o pipefail + local exec_name=/usr/local/bin/runc + local exec_version_cmd=-version + local version=v1.1.3 + command -v /usr/local/bin/runc + return 1 + _is_arm_arch INFO[2022-11-11T11:46:14Z] + uname -m INFO[2022-11-11T11:46:14Z] + grep aarch64 INFO[2022-11-11T11:46:14Z] + return 1 INFO[2022-11-11T11:46:14Z] + curl -sfSL https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64 -o runc INFO[2022-11-11T11:46:14Z] + chmod +x runc INFO[2022-11-11T11:46:14Z] + sudo mv runc /usr/local/bin/ INFO[2022-11-11T11:46:14Z] + install_containerd + _check_version /usr/local/bin/containerd --version v1.6.6 INFO[2022-11-11T11:46:14Z] + set +o pipefail + local exec_name=/usr/local/bin/containerd + local exec_version_cmd=--version + local version=v1.6.6 + command -v /usr/local/bin/containerd + return 1 + local version=1.6.6 + local dir=containerd-1.6.6 + _is_arm_arch INFO[2022-11-11T11:46:14Z] + uname -m INFO[2022-11-11T11:46:14Z] + grep aarch64 INFO[2022-11-11T11:46:14Z] + return 1 INFO[2022-11-11T11:46:14Z] + curl -sfSLO https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz INFO[2022-11-11T11:46:15Z] + mkdir -p containerd-1.6.6 INFO[2022-11-11T11:46:15Z] + tar -zxvf containerd-1.6.6-linux-amd64.tar.gz -C containerd-1.6.6 INFO[2022-11-11T11:46:15Z] bin/ bin/containerd-shim INFO[2022-11-11T11:46:15Z] bin/containerd INFO[2022-11-11T11:46:16Z] bin/containerd-shim-runc-v1 INFO[2022-11-11T11:46:16Z] bin/containerd-stress INFO[2022-11-11T11:46:16Z] bin/containerd-shim-runc-v2 
INFO[2022-11-11T11:46:16Z] bin/ctr INFO[2022-11-11T11:46:17Z] + chmod +x containerd-1.6.6/bin/containerd containerd-1.6.6/bin/containerd-shim containerd-1.6.6/bin/containerd-shim-runc-v1 containerd-1.6.6/bin/containerd-shim-runc-v2 containerd-1.6.6/bin/containerd-stress containerd-1.6.6/bin/ctr INFO[2022-11-11T11:46:17Z] + sudo mv containerd-1.6.6/bin/containerd containerd-1.6.6/bin/containerd-shim containerd-1.6.6/bin/containerd-shim-runc-v1 containerd-1.6.6/bin/containerd-shim-runc-v2 containerd-1.6.6/bin/containerd-stress containerd-1.6.6/bin/ctr /usr/local/bin/ INFO[2022-11-11T11:46:17Z] + curl -sfSLO https://raw.githubusercontent.com/containerd/containerd/v1.6.6/containerd.service INFO[2022-11-11T11:46:17Z] + sudo groupadd containerd INFO[2022-11-11T11:46:17Z] + sudo mv containerd.service /etc/systemd/system/containerd.service INFO[2022-11-11T11:46:17Z] ++ command -v chgrp INFO[2022-11-11T11:46:17Z] ++ tr -d '\n' INFO[2022-11-11T11:46:17Z] + chgrp_path=/usr/bin/chgrp INFO[2022-11-11T11:46:17Z] + sudo sed -i -E 's#(ExecStart=/usr/local/bin/containerd)#\1\nExecStartPost=/usr/bin/chgrp containerd /run/containerd/containerd.sock#g' /etc/systemd/system/containerd.service INFO[2022-11-11T11:46:17Z] + sudo mkdir -p /etc/containerd INFO[2022-11-11T11:46:17Z] + containerd config default INFO[2022-11-11T11:46:17Z] + sudo tee /etc/containerd/config.toml INFO[2022-11-11T11:46:17Z] + sudo systemctl enable --now containerd INFO[2022-11-11T11:46:17Z] Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service. INFO[2022-11-11T11:46:17Z] + install_cni + _check_version /opt/cni/bin/bridge --version v1.1.1 + set +o pipefail INFO[2022-11-11T11:46:17Z] + local exec_name=/opt/cni/bin/bridge + local exec_version_cmd=--version + local version=v1.1.1 + command -v /opt/cni/bin/bridge INFO[2022-11-11T11:46:17Z] + return 1 INFO[2022-11-11T11:46:17Z] + mkdir -p /opt/cni/bin INFO[2022-11-11T11:46:17Z] + local f=https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz + _is_arm_arch INFO[2022-11-11T11:46:17Z] + uname -m INFO[2022-11-11T11:46:17Z] + grep aarch64 INFO[2022-11-11T11:46:17Z] + return 1 INFO[2022-11-11T11:46:17Z] + curl -sfSL https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz INFO[2022-11-11T11:46:17Z] + tar -C /opt/cni/bin -xz INFO[2022-11-11T11:46:19Z] + install_cni_patches + _is_arm_arch INFO[2022-11-11T11:46:19Z] + uname -m INFO[2022-11-11T11:46:19Z] + grep aarch64 INFO[2022-11-11T11:46:19Z] + return 1 + curl -o host-local-rev -sfSL https://github.com/innobead/kubefire/releases/download/v0.3.8/host-local-rev-linux-amd64 INFO[2022-11-11T11:46:19Z] + chmod +x host-local-rev INFO[2022-11-11T11:46:19Z] + sudo mv host-local-rev /opt/cni/bin/ INFO[2022-11-11T11:46:19Z] + install_ignite + _check_version /usr/local/bin/ignite version v0.10.0 + set +o pipefail INFO[2022-11-11T11:46:19Z] + local exec_name=/usr/local/bin/ignite + local exec_version_cmd=version + local version=v0.10.0 + command -v /usr/local/bin/ignite + return 1 INFO[2022-11-11T11:46:19Z] + for binary in ignite ignited + echo 'Installing ignite...' INFO[2022-11-11T11:46:19Z] Installing ignite... 
INFO[2022-11-11T11:46:19Z] + local f=https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignite-amd64 + _is_arm_arch INFO[2022-11-11T11:46:19Z] + uname -m INFO[2022-11-11T11:46:19Z] + grep aarch64 INFO[2022-11-11T11:46:19Z] + return 1 + curl -sfSLo ignite https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignite-amd64 INFO[2022-11-11T11:46:20Z] + chmod +x ignite INFO[2022-11-11T11:46:20Z] + sudo mv ignite /usr/local/bin INFO[2022-11-11T11:46:20Z] + for binary in ignite ignited + echo 'Installing ignited...' Installing ignited... + local f=https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignited-amd64 INFO[2022-11-11T11:46:20Z] + _is_arm_arch INFO[2022-11-11T11:46:20Z] + grep aarch64 + uname -m INFO[2022-11-11T11:46:20Z] + return 1 + curl -sfSLo ignited https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignited-amd64 INFO[2022-11-11T11:46:21Z] + chmod +x ignited INFO[2022-11-11T11:46:21Z] + sudo mv ignited /usr/local/bin INFO[2022-11-11T11:46:21Z] + check_ignite + ignite version INFO[2022-11-11T11:46:21Z] Ignite version: version.Info{Major:"0", Minor:"10", GitVersion:"v0.10.0", GitCommit:"4540abeb9ba6daba32a72ef2b799095c71ebacb0", GitTreeState:"clean", BuildDate:"2021-07-19T20:52:59Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"linux/amd64", SandboxImage:version.Image{Name:"weaveworks/ignite", Tag:"v0.10.0", Delimeter:":"}, KernelImage:version.Image{Name:"weaveworks/ignite-kernel", Tag:"5.10.51", Delimeter:":"}} INFO[2022-11-11T11:46:21Z] Firecracker version: v0.22.4 INFO[2022-11-11T11:46:21Z] + create_cni_default_config INFO[2022-11-11T11:46:21Z] + mkdir -p /etc/cni/net.d/ INFO[2022-11-11T11:46:21Z] + sudo cat INFO[2022-11-11T11:46:21Z] + popd /root + cleanup INFO[2022-11-11T11:46:21Z] + rm -rf /tmp/kubefire

What are some alternatives?

When comparing buildkit and runc you can also consider the following projects:

crun - A fast and lightweight fully featured OCI runtime and C library for running containers

buildah - A tool that facilitates building OCI images.

kaniko - Build Container Images In Kubernetes

jib - 🏗 Build container images for your Java applications.

buildx - Docker CLI plugin for extended build capabilities with BuildKit

podman - Podman: A tool for managing OCI containers and pods.

nerdctl - contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS, ...

amazon-ecr-login - Logs into Amazon ECR with the local Docker client.

setup-buildx-action - GitHub Action to set up Docker Buildx

dive - A tool for exploring each layer in a docker image

source-to-image - A tool for building artifacts from source and injecting into container images

maven-mvnd - Apache Maven Daemon