containerd
minikube
| | containerd | minikube |
|---|---|---|
| Mentions | 125 | 76 |
| Stars | 16,131 | 28,207 |
| Growth | 3.8% | 1.1% |
| Activity | 9.9 | 9.9 |
| Last commit | 6 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
containerd
-
Exploring 5 Docker Alternatives: Containerization Choices for 2024
Containerd and nerdctl
-
The Road To Kubernetes: How Older Technologies Add Up
On the backend, Kubernetes used to rely on Docker for much of its container runtime needs. One of Kubernetes' modular features is the ability to use a Container Runtime Interface, or CRI. The problem was that Docker didn't really meet the spec, so the Kubernetes project had to maintain a shim to translate properly. Instead, users can use the popular containerd or cri-o runtimes. These follow the Open Container Initiative (OCI) guidelines on container formats.
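As a concrete illustration of pointing CRI tooling at containerd rather than the old dockershim, a `crictl` configuration might look like the fragment below (the socket path is the common default and can differ per distro):

```yaml
# /etc/crictl.yaml -- assumes the default containerd socket location
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```

With this in place, `crictl ps` talks to containerd directly over the CRI, the same way the kubelet does.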
-
Fun with Avatars: Containerize the app for deployment & distribution | Part. 2
Container Engine: A runtime that executes and manages containers. Docker and containerd are popular container engines.
-
Complexity by Simplicity - A Deep Dive Into Kubernetes Components
Multiple container runtimes are supported, like containerd, cri-o, or other CRI-compliant runtimes.
-
macOS Containers v0.0.1
If you really want good adoption, you’ll have to figure out a way for devs to try it out without first having to disable SIP.
Is this related to the code you tried to have merged here: https://github.com/containerd/containerd/pull/8789 ?
This is a failed attempt to upstream part of the containerd changes: https://github.com/containerd/containerd/pull/8789
The other part of the containerd changes is waiting for gods-know-what: https://github.com/containerd/containerd/pull/9054
But I haven't given up yet.
-
Kubernetes Setup With WSL Control Plane and Raspberry Pi Workers
containerd is required by Kubernetes to handle containers on its behalf. A big thanks to the HostAfrica blog for the information on setting containerd up for Debian. The containerd install needs to happen on both the WSL2 instance and the Raspberry Pis. For WSL2 you can just install containerd directly:
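The quote above stops where the install commands would go; a minimal sketch for a Debian-based WSL2 instance might look like the following (the `containerd` package name from the Debian repos is an assumption here; some setups use Docker's `containerd.io` package instead):

```shell
# Install containerd from the Debian repositories (assumed package name)
sudo apt-get update
sudo apt-get install -y containerd

# Generate a default config and enable the service
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo systemctl enable --now containerd
```

The same steps apply on the Raspberry Pis, since Debian publishes arm64 builds of the package.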
-
Understanding Docker Architecture: A Beginner's Guide to How Docker Works
Containerd: This is an open-source container runtime that manages a container's lifecycle. Both Docker and Kubernetes can use containerd: it exposes a high-level API for managing containers and delegates execution to a low-level OCI runtime such as runc.
-
The advantage of WASM compared with container runtimes
Right now, alas, most early examples boot a container with a wasm runtime for each wasm instance, which is a sad waste. The whole advantage of wasm should be very lightweight, low-overhead wasm runtime instances atop a common wasm process. Having a process or container for each instance loses a ton of the benefit, and makes it not much better than a regular container.
Thankfully there is work like the Containerd Sandbox API which enables new architectures like this. https://github.com/containerd/containerd/issues/4131
It's still being used to spawn a wasm process per instance for now, but the container runtime project Kuasar is already using the Sandbox API to save significant resources, and has already chimed in in HN comments to express a desire for shared-process/multi-wasm-instance runtimes, which could indeed allow sub-millisecond spawning that could enable instance-per-request architectures. https://github.com/kuasar-io/kuasar
-
Best virtualization solution with Ubuntu 22.04
containerd
minikube
-
K3s – Lightweight Kubernetes
If you're just messing around, just use kind (https://kind.sigs.k8s.io) or minikube if you want VMs (https://minikube.sigs.k8s.io). Both work on ARM-based platforms.
You can also use k3s; it's hella easy to get started with and it works great.
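For reference, the quick-start for each of the three options mentioned is roughly the following (flags omitted; the k3s install script URL is the one the project documents):

```shell
# kind: the cluster's "nodes" run as Docker containers
kind create cluster

# minikube: picks a VM or Docker driver depending on your platform
minikube start

# k3s: single-binary install via the project's install script
curl -sfL https://get.k3s.io | sh -
```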
-
Developer’s Guide to Building Kubernetes Cloud Apps ☁️🚀
$ minikube addons enable dashboard
💡 dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
🌟 The 'dashboard' addon is enabled
$ minikube addons enable metrics-server
💡 metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
🌟 The 'metrics-server' addon is enabled
$ minikube addons enable ingress
💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
💡 After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
-
Implementing TLS in Kubernetes
A Kubernetes distribution: You need to install a Kubernetes distribution to create the Kubernetes cluster and other necessary resources, such as deployments and services. This tutorial uses kind (v0.18.0), but you can use any other Kubernetes distribution, including minikube or K3s.
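As a side note for TLS testing with kind, a cluster that forwards ports 80/443 from the host into the node can be declared in a config file; the port mappings below are assumptions for a typical local ingress setup, not something the tutorial prescribes:

```yaml
# kind-config.yaml -- pass with: kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```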
-
Kube-bench and Popeye: A Power Duo for AKS Security Compliance
> minikube start
😄 minikube v1.22.0 on Darwin 12.6.2
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running hyperkit "minikube" VM ...
🎉 minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.6 ...
🔎 Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/local/bin/kubectl is version 1.25.2, which may have incompatibilities with Kubernetes 1.21.2.
    ▪ Want kubectl v1.21.2? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

# Download the job.yaml file
> curl https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml > job.yaml
> kubectl apply -f job.yaml
job.batch/kube-bench created
> kubectl get pods -A
NAMESPACE   NAME               READY   STATUS              RESTARTS   AGE
default     kube-bench-t2fgh   0/1     ContainerCreating   0          5s
> kubectl get pods -A
NAMESPACE   NAME               READY   STATUS      RESTARTS   AGE
default     kube-bench-t2fgh   0/1     Completed   0          32s
-
Best way to install and use kubernetes for learning
minikube (https://github.com/kubernetes/minikube) - based off of Docker Machine; uses a driver for the backend, so it can use KVM, Vagrant, or Docker itself to bootstrap a K8s cluster.
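That driver choice maps to minikube's `--driver` flag; the driver names below are the documented ones, though availability depends on what's installed locally:

```shell
# Run the cluster "node" as a Docker container
minikube start --driver=docker

# Run it in a KVM virtual machine instead (Linux)
minikube start --driver=kvm2

# Make a driver the default for future clusters
minikube config set driver docker
```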
-
Running Kubernetes locally on M1 Mac
When I run minikube start --driver=docker (having installed the tech preview of Docker Desktop for M1), an initialization error occurs. It seems to me that this is being tracked here https://github.com/kubernetes/minikube/issues/9224.
-
Kubernetes' minikube uses my Go Lang Project!
I am very honored to announce that my Go language project Box CLI Maker, which makes highly customized boxes for CLIs, is being used in Kubernetes's minikube, which "implements a local Kubernetes cluster on macOS, Linux, and Windows," according to its description.
-
kubelet does not have ClusterDNS IP configured in Microk8s
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
  # Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

Please let me know what I'm missing.
-
Kubernetes Series (Part 1) : Basics of Kubernetes & its architecture
If you are a Docker toolbox user on Windows, install minikube & then install kubectl
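On Windows, that install step can be done with Chocolatey; the package names below are the ones Chocolatey publishes (`kubernetes-cli` provides kubectl):

```shell
choco install minikube
choco install kubernetes-cli

# Sanity check that kubectl is on PATH
kubectl version --client
```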
-
Deploy Kubernetes Resources in Minikube cluster using Terraform
$ minikube start
😄 minikube v1.24.0 on Ubuntu 21.04
    ▪ KUBECONFIG=$USERHOME/.kube/config
🎉 minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
❗ /snap/bin/kubectl is version 1.24.2, which may have incompatibilities with Kubernetes 1.22.3.
    ▪ Want kubectl v1.22.3? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
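Once the cluster is up, the Terraform side boils down to pointing the Kubernetes provider at the kubeconfig minikube just wrote; a minimal sketch (the context name `minikube` is the default one minikube creates, and the namespace resource is just an illustrative example) looks like:

```hcl
# main.tf -- assumes minikube has already written ~/.kube/config
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

resource "kubernetes_namespace" "demo" {
  metadata {
    name = "terraform-demo"
  }
}
```

A `terraform init` followed by `terraform apply` then creates the namespace in the minikube cluster.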
What are some alternatives?
colima - Container runtimes on macOS (and Linux) with minimal setup
podman - Podman: A tool for managing OCI containers and pods.
lima - Linux virtual machines, with a focus on running containers
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
cri-o - Open Container Initiative-based implementation of Kubernetes Container Runtime Interface
Moby - The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems
kubespray - Deploy a Production Ready Kubernetes Cluster
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!
podman-compose - a script to run docker-compose.yml using podman
sysbox - An open-source, next-generation "runc" that empowers rootless containers to run workloads such as Systemd, Docker, Kubernetes, just like VMs.
helm - The Kubernetes Package Manager