minikube
Run Kubernetes locally (by kubernetes)
kubespray
Deploy a Production Ready Kubernetes Cluster (by kubernetes-sigs)
| | minikube | kubespray |
|---|---|---|
| Mentions | 76 | 55 |
| Stars | 28,207 | 15,237 |
| Growth | 1.1% | 1.7% |
| Activity | 9.9 | 9.6 |
| Latest commit | 6 days ago | 7 days ago |
| Language | Go | Jinja |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
minikube
Posts with mentions or reviews of minikube.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-10-11.
-
K3s – Lightweight Kubernetes
If you're just messing around, just use kind (https://kind.sigs.k8s.io) or minikube if you want VMs (https://minikube.sigs.k8s.io). Both work on ARM-based platforms.
You can also use k3s; it's hella easy to get started with and it works great.
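The advice above maps each tool to a use case. As a minimal sketch, the helper below (a hypothetical name, not from any of these projects) pairs each use case with the bootstrap command the post recommends; the commands are only printed, not executed:

```shell
# suggest_local_k8s: hypothetical helper mapping a use case to the
# bootstrap command recommended above. Commands are echoed, not run.
suggest_local_k8s() {
  case "$1" in
    containers)  echo "kind create cluster" ;;                 # kind: nodes run as Docker containers
    vms)         echo "minikube start" ;;                      # minikube: VM-backed on many hosts
    lightweight) echo "curl -sfL https://get.k3s.io | sh -" ;; # k3s: single-binary install script
    *)           echo "unknown use case: $1" >&2; return 1 ;;
  esac
}

suggest_local_k8s containers   # prints "kind create cluster"
suggest_local_k8s vms          # prints "minikube start"
```

The k3s line uses the project's documented install script; kind and minikube default commands are taken from their respective quick-start docs.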
-
Developer's Guide to Building Kubernetes Cloud Apps
```
$ minikube addons enable dashboard
dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
  - Using image docker.io/kubernetesui/dashboard:v2.7.0
  - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
The 'dashboard' addon is enabled

$ minikube addons enable metrics-server
metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
  - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
The 'metrics-server' addon is enabled

$ minikube addons enable ingress
ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
  - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
Verifying ingress addon...
The 'ingress' addon is enabled
```
-
Implementing TLS in Kubernetes
A Kubernetes distribution: You need to install a Kubernetes distribution to create the Kubernetes cluster and other necessary resources, such as deployments and services. This tutorial uses kind (v0.18.0), but you can use any other Kubernetes distribution, including minikube or K3s.
-
Kube-bench and Popeye: A Power Duo for AKS Security Compliance
```
> minikube start
minikube v1.22.0 on Darwin 12.6.2
Using the hyperkit driver based on existing profile
Starting control plane node minikube in cluster minikube
Updating the running hyperkit "minikube" VM ...
minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
To disable this notice, run: 'minikube config set WantUpdateNotification false'
Preparing Kubernetes v1.21.2 on Docker 20.10.6 ...
Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
/usr/local/bin/kubectl is version 1.25.2, which may have incompatibilities with Kubernetes 1.21.2.
  - Want kubectl v1.21.2? Try 'minikube kubectl -- get pods -A'
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

# Download the job.yaml file
> curl https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml > job.yaml
> kubectl apply -f job.yaml
job.batch/kube-bench created

> kubectl get pods -A
NAMESPACE   NAME               READY   STATUS              RESTARTS   AGE
default     kube-bench-t2fgh   0/1     ContainerCreating   0          5s

> kubectl get pods -A
NAMESPACE   NAME               READY   STATUS      RESTARTS   AGE
default     kube-bench-t2fgh   0/1     Completed   0          32s
```
-
Best way to install and use kubernetes for learning
minikube (https://github.com/kubernetes/minikube) - based off of Docker Machine; uses a driver for the backend, so it can use KVM, Vagrant, or Docker itself to bootstrap a K8s cluster.
-
Running Kubernetes locally on M1 Mac
When I run minikube start --driver=docker (having installed the tech preview of Docker Desktop for M1), an initialization error occurs. It seems to me that this is being tracked here https://github.com/kubernetes/minikube/issues/9224.
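The failure described above was specific to the Apple Silicon (arm64) preview of Docker Desktop. As a hedged sketch of the decision involved, the hypothetical helper below (not part of minikube) picks a driver based on CPU architecture, since hyperkit was only available on Intel Macs at the time:

```shell
# pick_driver: hypothetical sketch (not a minikube command) of choosing
# a minikube driver by CPU architecture. On arm64 Macs only the docker
# driver preview applied at the time; Intel Macs could also use hyperkit.
pick_driver() {
  case "$1" in
    arm64)  echo "docker" ;;
    x86_64) echo "hyperkit" ;;
    *)      echo "docker" ;;  # fall back to the most portable driver
  esac
}

minikube_driver=$(pick_driver "$(uname -m)")
echo "would run: minikube start --driver=${minikube_driver}"
```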
-
Kubernetes' minikube uses my Go Lang Project!
I am very honored to announce that my Go project, Box CLI Maker, which makes highly customized boxes for CLIs, is being used in Kubernetes's minikube, which, per its description, implements a local Kubernetes cluster on macOS, Linux, and Windows.
-
kubelet does not have ClusterDNS IP configured in Microk8s
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
# Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      volumes:
        - name: kube-dns-config
          configMap:
            name: kube-dns
            optional: true
      containers:
        - name: kubedns
          image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
          resources:
            # TODO: Set memory limits when we've profiled the container for large
            # clusters, then set request = limit to keep this container in
            # guaranteed class. Currently, this container falls into the
            # "burstable" category so the kubelet doesn't backoff from restarting it.
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          livenessProbe:
            httpGet:
              path: /healthcheck/kubedns
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 8081
              scheme: HTTP
            # we poll on pod startup for the Kubernetes master service and
            # only setup the /readiness HTTP server once that's available.
            initialDelaySeconds: 3
            timeoutSeconds: 5
          args:
            - --domain=cluster.local.
            - --dns-port=10053
            - --config-dir=/kube-dns-config
            - --v=2
          env:
            - name: PROMETHEUS_PORT
              value: "10055"
          ports:
            - containerPort: 10053
              name: dns-local
              protocol: UDP
            - containerPort: 10053
              name: dns-tcp-local
              protocol: TCP
            - containerPort: 10055
              name: metrics
              protocol: TCP
          volumeMounts:
            - name: kube-dns-config
              mountPath: /kube-dns-config
        - name: dnsmasq
          image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
          livenessProbe:
            httpGet:
              path: /healthcheck/dnsmasq
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          args:
            - -v=2
            - -logtostderr
            - -configDir=/etc/k8s/dns/dnsmasq-nanny
            - -restartDnsmasq=true
            - --
            - -k
            - --cache-size=1000
            - --no-negcache
            - --log-facility=-
            - --server=/cluster.local/127.0.0.1#10053
            - --server=/in-addr.arpa/127.0.0.1#10053
            - --server=/ip6.arpa/127.0.0.1#10053
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
          # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
          resources:
            requests:
              cpu: 150m
              memory: 20Mi
          volumeMounts:
            - name: kube-dns-config
              mountPath: /etc/k8s/dns/dnsmasq-nanny
        - name: sidecar
          image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
          livenessProbe:
            httpGet:
              path: /metrics
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          args:
            - --v=2
            - --logtostderr
            - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
            - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
          ports:
            - containerPort: 10054
              name: metrics
              protocol: TCP
          resources:
            requests:
              memory: 20Mi
              cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
```
Please let me know what I'm missing.
-
Kubernetes Series (Part 1) : Basics of Kubernetes & its architecture
If you are a Docker Toolbox user on Windows, install minikube and then install kubectl.
-
Deploy Kubernetes Resources in Minikube cluster using Terraform
```
$ minikube start
minikube v1.24.0 on Ubuntu 21.04
  - KUBECONFIG=$USERHOME/.kube/config
minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
To disable this notice, run: 'minikube config set WantUpdateNotification false'
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Pulling base image ...
Restarting existing docker container for "minikube" ...
Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: default-storageclass, storage-provisioner
/snap/bin/kubectl is version 1.24.2, which may have incompatibilities with Kubernetes 1.22.3.
  - Want kubectl v1.22.3? Try 'minikube kubectl -- get pods -A'
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```
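Both transcripts in this section warn about kubectl/cluster version skew (kubectl's documented skew policy tolerates at most one minor version of difference). As a hedged sketch, the helper functions below (hypothetical names, not minikube commands) reproduce that check for version strings like the ones in the output above:

```shell
# minor_version: extract the minor number from a "vX.Y.Z" or "X.Y.Z" string.
minor_version() {
  echo "$1" | sed 's/^v//' | cut -d. -f2
}

# skew_warning: hypothetical sketch mimicking the transcript's warning when
# client and server minor versions differ by more than one.
skew_warning() {
  client_minor=$(minor_version "$1")
  server_minor=$(minor_version "$2")
  diff=$((client_minor - server_minor))
  [ "$diff" -lt 0 ] && diff=$((-diff))
  if [ "$diff" -gt 1 ]; then
    echo "kubectl $1 may have incompatibilities with Kubernetes $2"
  else
    echo "ok"
  fi
}

skew_warning v1.24.2 v1.22.3   # the pairing from the transcript above: warns
skew_warning v1.22.3 v1.22.3   # matching versions: prints "ok"
```

This is why the transcripts suggest `minikube kubectl -- get pods -A`, which runs a kubectl matching the cluster version.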
kubespray
Posts with mentions or reviews of kubespray.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-01-11.
-
Zarf: K8s in Airgapped Environments
Worth noting that if you like ansible, Kubespray has had documented air-gap installation since 2018 https://github.com/kubernetes-sigs/kubespray/commit/963c3479...
- Ask HN: Options for K8s On-Prem
-
How many of you are running kubernetes on prem?
About a year ago I ran k8s with 300 nodes using Kubespray https://github.com/kubernetes-sigs/kubespray . Never had any real issue with it. We did finally move to the cloud though.
- Automated Kubernetes installation
-
Building your own Kubernetes distribution
We build our own distro called Compliant Kubernetes, and we use (a fork of) kubespray to install the base Kubernetes layer. Our distro is entirely open source, so you can use it as a reference, if you want.
-
Ansible for provisioning nodes
You may want to look at kubespray. They are ansible playbooks for provisioning clusters.
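Per Kubespray's docs, the usual flow is to clone the repo, copy `inventory/sample` to your own inventory directory (e.g. `cp -rfp inventory/sample inventory/mycluster`), fill in your node addresses, and then run the `cluster.yml` playbook. As a sketch, the hypothetical helper below only assembles that final invocation; nothing is executed:

```shell
# build_kubespray_cmd: hypothetical helper (not part of Kubespray) that
# assembles the ansible-playbook invocation from Kubespray's docs.
# The inventory path is a placeholder; nothing runs here.
build_kubespray_cmd() {
  inventory="$1"
  echo "ansible-playbook -i ${inventory} --become --become-user=root cluster.yml"
}

build_kubespray_cmd inventory/mycluster/hosts.yaml
```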
-
Self-Managed Kubernetes Distributions
No worries! I use AWX with Ansible and love it. For this use case, though, that moves away from what I'm already using (i.e. it's more advanced). I don't think I need that level of flexibility for controlling the lifecycle of K8s nodes. Essentially, I'm looking for something like managed K8s but leaning towards the self-managed side, to keep some control, e.g. easy swapping of the CNI or CSI. Another tricky thing is upgrading nodes, which kubespray has established playbooks for. Upgrading K8s via custom playbooks sounds way over my head right now; I can't see the benefit (for myself personally, of course) over using the kubespray playbooks, which are robust.
-
Best way to install and use kubernetes for learning
KubeSpray (https://github.com/kubernetes-sigs/kubespray) - uses Ansible to stand up a Kubernetes cluster
- Spin up a bare metal cluster in 2022
- Kubernetes on Bare Metal
What are some alternatives?
When comparing minikube and kubespray you can also consider the following projects:
colima - Container runtimes on macOS (and Linux) with minimal setup
kubeadm - Aggregator for issues filed against kubeadm
lima - Linux virtual machines, with a focus on running containers
k3s - Lightweight Kubernetes
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
rancher - Complete container management platform
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!
kops - Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
rke2
helm - The Kubernetes Package Manager
cluster-api-provider-vsphere
ansible-role-k3s - Ansible role for installing k3s as either a standalone server or HA cluster.