Cluster

Open-source projects categorized as Cluster

Top 23 Cluster Open-Source Projects

  • minikube

    Run Kubernetes locally

    Project mention: minikube on Asahi / Arch linux | reddit.com/r/AsahiLinux | 2022-12-04

    minikube start --cpus 4 --memory 8192 --container-runtime "cri-o"

    😄 minikube v1.28.0 on Arch (arm64)
    ✨ Automatically selected the docker driver
    📌 Using rootless Docker driver
    👍 Starting control plane node minikube in cluster minikube
    🚜 Pulling base image ...
    🔥 Creating docker container (CPUs=4, Memory=8192MB) ...
    🎁 Preparing Kubernetes v1.25.3 on CRI-O 1.24.3 ...
        ▪ Generating certificates and keys ...
        ▪ Booting up control plane ...
    💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1

    stdout:
    [init] Using Kubernetes version: v1.25.3
    [preflight] Running pre-flight checks
    [preflight] The system verification failed. Printing the output from the verification:
    KERNEL_VERSION: 6.1.0-rc6-asahi-5-1-ARCH
    CONFIG_NAMESPACES: enabled    CONFIG_NET_NS: enabled    CONFIG_PID_NS: enabled    CONFIG_IPC_NS: enabled    CONFIG_UTS_NS: enabled
    CONFIG_CGROUPS: enabled    CONFIG_CGROUP_CPUACCT: enabled    CONFIG_CGROUP_DEVICE: enabled    CONFIG_CGROUP_FREEZER: enabled    CONFIG_CGROUP_PIDS: enabled    CONFIG_CGROUP_SCHED: enabled    CONFIG_CPUSETS: enabled    CONFIG_MEMCG: enabled
    CONFIG_INET: enabled    CONFIG_EXT4_FS: enabled    CONFIG_PROC_FS: enabled    CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)    CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)    CONFIG_FAIR_GROUP_SCHED: enabled
    CONFIG_OVERLAY_FS: enabled (as module)    CONFIG_AUFS_FS: not set - Required for aufs.    CONFIG_BLK_DEV_DM: enabled    CONFIG_CFS_BANDWIDTH: enabled    CONFIG_CGROUP_HUGETLB: enabled    CONFIG_SECCOMP: enabled    CONFIG_SECCOMP_FILTER: enabled
    OS: Linux
    CGROUPS_CPU: enabled    CGROUPS_CPUSET: missing    CGROUPS_DEVICES: enabled    CGROUPS_FREEZER: enabled    CGROUPS_MEMORY: enabled    CGROUPS_PIDS: enabled    CGROUPS_HUGETLB: missing    CGROUPS_BLKIO: missing
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/var/lib/minikube/certs"
    [certs] Using existing ca certificate authority
    [certs] Using existing apiserver certificate and key on disk
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [... further certificate, kubeconfig, kubelet-start and static Pod manifest steps omitted ...]
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.
    Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

    stderr:
    W1204 10:08:33.608317     734 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
        [WARNING SystemVerification]: missing optional cgroups: hugetlb blkio
        [WARNING SystemVerification]: missing required cgroups: cpuset
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    W1204 10:08:36.411340     734 kubelet.go:63] [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher

        ▪ Generating certificates and keys ...
        ▪ Booting up control plane ...
    💣 Error starting cluster: wait: /bin/bash -c "sudo env ... kubeadm init ... (same command and flags as above)": Process exited with status 1
    [... the retry prints the same preflight verification, reuses the existing certificates, and again fails with "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster" ...]

    ╭─────────────────────────────────────────────────────────────────────────────────────╮
    │                                                                                       │
    │    😿 If the above advice does not help, please let us know:                          │
    │    👉 https://github.com/kubernetes/minikube/issues/new/choose                        │
    │                                                                                       │
    │    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.│
    │                                                                                       │
    ╰─────────────────────────────────────────────────────────────────────────────────────╯

    ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env ... kubeadm init ... (same command and flags as above)": Process exited with status 1
    [... the full kubeadm output is printed a third time, identical to the retry above ...]

    💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
    🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172
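
    The preflight output above points at the missing cpuset cgroup controller on this Asahi kernel, and the suggestion at the end of the log is the usual first step. A minimal troubleshooting sketch, assuming a cgroup v2 host and the same rootless Docker driver (the commands are illustrative, not a guaranteed fix):

      # Check which cgroup controllers the kernel actually exposes (cgroup v2);
      # kubeadm's "missing required cgroups: cpuset" means cpuset is absent here.
      cat /sys/fs/cgroup/cgroup.controllers

      # Inspect why the kubelet inside the minikube node never came up.
      minikube ssh -- sudo journalctl -xeu kubelet | tail -n 50

      # Retry with the cgroup driver suggested by minikube itself.
      minikube delete
      minikube start --cpus 4 --memory 8192 --container-runtime cri-o \
        --extra-config=kubelet.cgroup-driver=systemd

    If the cpuset controller really is unavailable in the running kernel, changing the kubelet's cgroup driver alone is unlikely to be enough.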

  • TDengine

    TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, Industrial IoT and DevOps.

    Project mention: TDengine: NEW Data - star count:19895.0 | reddit.com/r/algoprojects | 2022-12-03

  • Gravitational Teleport

    The easiest, most secure way to access infrastructure.

    Project mention: Apache Guacamole (or other) to browse internal web-GUIs? | reddit.com/r/homelab | 2022-11-27

    Dunno if Teleport does it. I've been meaning to deploy it for a while and will be looking into getting it set up soon. https://github.com/gravitational/teleport

  • phpredis

    A PHP extension for Redis

    Project mention: how to handle millions of data read/writes everyday? | reddit.com/r/laravel | 2022-08-11

    And then, read up on the PHPRedis Extension to see how easy it is to use -> https://github.com/phpredis/phpredis
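
    To give a sense of how small the API surface is, here is a minimal, hypothetical sketch. It assumes the extension is installed via PECL and a Redis server is listening on 127.0.0.1:6379; the ini path varies by distro:

      # Install and enable the extension (package and ini paths vary by distro).
      pecl install redis
      echo "extension=redis.so" | sudo tee /etc/php/conf.d/redis.ini

      # Smoke test: connect, write a key, read it back.
      php -r '$r = new Redis(); $r->connect("127.0.0.1", 6379); $r->set("greeting", "hello"); echo $r->get("greeting"), PHP_EOL;'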

  • VictoriaMetrics

    VictoriaMetrics: fast, cost-effective monitoring solution and time series database

    Project mention: Monitoring Microservices with Prometheus and Grafana | news.ycombinator.com | 2022-12-09

    I really don't get why "scrape the prometheus endpoint" is a go-to now, push model seems to be way less PITA to manage at scale.

    > If you get serious about Prometheus, eventually you will want longer data retention, checkout https://thanos.io/

    Any idea how it compares with https://victoriametrics.com/ ?

    We're slowly looking for a replacement for InfluxDB (as 1.8 is essentially on life support), and the low disk footprint is a pretty big advantage here.
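
    For context on the push-vs-scrape point: a single-node VictoriaMetrics instance accepts pushed samples over plain HTTP, so agents can write without being scraped. A rough sketch, assuming the default instance on localhost:8428 (the host labels and metric name are made up):

      # Push one sample in Prometheus text exposition format.
      curl -sS -X POST 'http://localhost:8428/api/v1/import/prometheus' \
        --data-binary 'disk_used_bytes{host="web-01",mount="/"} 123456789'

      # Read it back through the Prometheus-compatible query API.
      curl -sS 'http://localhost:8428/api/v1/query' --data-urlencode 'query=disk_used_bytes'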

  • guide

    Kubernetes clusters for the hobbyist. (by hobby-kube)

  • Akka.net

    Canonical actor model implementation for .NET with local + distributed actors in C# and F#.

    Project mention: Using functional extensions in production C# code? | reddit.com/r/csharp | 2022-08-26

    However, I've found that sometimes they are a little -too- functional. I'm a bit more partial to Akka.Net's implementation of Option and Try, if only because they have good 'escape hatches' where you can interrogate them in a more procedural manner.

  • k3d

    Little helper to run CNCF's k3s in Docker

    Project mention: Advice needed: a good alternative to Raspberry Pi cluster for homelab | reddit.com/r/homelab | 2022-12-06
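
    For the homelab use case, a throwaway multi-node cluster is a couple of commands. A rough sketch, assuming Docker is already running (the cluster name and port mapping are arbitrary):

      # Create a 1-server / 2-agent k3s cluster in Docker containers,
      # exposing the built-in load balancer on host port 8080.
      k3d cluster create homelab --servers 1 --agents 2 -p "8080:80@loadbalancer"

      # The kubeconfig is merged automatically; verify the nodes.
      kubectl get nodes -o wide

      # Tear it down when done.
      k3d cluster delete homelab
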
  • TensorFlowOnSpark

    TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters.

    Project mention: [D]Speed up inference on Spark | reddit.com/r/MachineLearning | 2022-02-18

    Currently I use the TensorFlowOnSpark framework to train models and run predictions. At prediction time, I have billions of samples to predict, which is time-consuming. I wonder if there are any good practices for this.

  • Crate

    CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of machine data in real-time. Built on top of Lucene.

    Project mention: Distributed query execution in CrateDB: What you need to know | dev.to | 2022-07-20

    A logical execution plan does not take into account the information about data distribution. CrateDB is a distributed database and data is sharded: a table can be split into many parts - so-called shards. Shards can be independently replicated and moved from one node to another. The number of shards a table can have is specified at the time the table is created.
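
    As a concrete illustration of that last point, the shard count is declared in the CREATE TABLE statement itself. A hypothetical sketch against a local node's HTTP endpoint (the table name, shard count and replica count are made up):

      # CrateDB exposes SQL over HTTP on port 4200.
      curl -sS -H 'Content-Type: application/json' -X POST 'http://localhost:4200/_sql' -d '{
        "stmt": "CREATE TABLE sensor_readings (sensor_id TEXT, ts TIMESTAMP WITH TIME ZONE, value DOUBLE PRECISION) CLUSTERED INTO 6 SHARDS WITH (number_of_replicas = 1)"
      }'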

  • postgres-operator

    Postgres operator creates and manages PostgreSQL clusters running in Kubernetes (by zalando)

    Project mention: Best way for high-available database at home? | reddit.com/r/selfhosted | 2022-11-29

    I don't have much experience with HA databases, so I can't really decide which way I should go. I found a postgres-operator to be run on a kubernetes cluster: https://github.com/zalando/postgres-operator. And a guide to setup postgres HA with patroni: https://arctype.com/blog/postgres-patroni/
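
    To make the operator route concrete: once the operator itself is installed, a cluster is requested declaratively through a small custom resource. A hedged sketch based on the minimal example in the operator's repository (the names, size and version below are placeholders):

      kubectl apply -f - <<'EOF'
      apiVersion: "acid.zalan.do/v1"
      kind: postgresql
      metadata:
        name: acid-minimal-cluster
      spec:
        teamId: "acid"
        numberOfInstances: 2     # one primary plus one replica, managed by Patroni
        volume:
          size: 1Gi
        postgresql:
          version: "14"
      EOF

      # The operator then creates the StatefulSet, services and credential secrets.
      kubectl get postgresql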

  • polaris

    Validation of best practices in your Kubernetes clusters (by FairwindsOps)

    Project mention: Is OPA Gatekeeper the best solution for writing policies for k8s clusters? | reddit.com/r/kubernetes | 2022-11-10
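
    Compared to Gatekeeper's admission-time policies, Polaris is often run as a one-off audit first. A rough sketch, assuming the CLI is installed and the current kubeconfig points at the target cluster (the manifest path is hypothetical):

      # Audit the live cluster against Polaris' built-in best-practice checks.
      polaris audit --format=pretty

      # Or audit local manifests before they ever reach the cluster.
      polaris audit --audit-path ./manifests/ --format=pretty
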
  • puppeteer-cluster

    Puppeteer Pool, run a cluster of instances in parallel

    Project mention: Looking for something and I'm not sure what it would be called..... | reddit.com/r/selfhosted | 2022-11-13

    You could set up a service with something like playwright, puppeteer (with puppeteer-cluster), or browserless to access the service internally and serve screenshots of it to the outside user. You'd probably have to set up some kind of web service with the appropriate routes.

  • godis

    A Redis server and distributed cluster implemented in Golang

    Project mention: Open Source Databases in Go | reddit.com/r/golang | 2022-06-08

    godis - A Golang implemented high-performance Redis server and cluster.

  • gardener

    Kubernetes-native system managing the full lifecycle of conformant Kubernetes clusters as a service on Alicloud, AWS, Azure, GCP, OpenStack, EquinixMetal, vSphere, MetalStack, and Kubevirt with minimal TCO.

    Project mention: Where can I find managed K8s for the price of managed ECS? | reddit.com/r/kubernetes | 2022-09-28

  • ActionHero

    Actionhero is a realtime multi-transport nodejs API Server with integrated cluster capabilities and delayed tasks

  • dcos

    DC/OS - The Datacenter Operating System

  • icinga2

    The core of our monitoring platform with a powerful configuration language and REST API.

    Project mention: Linux server monitoring suggestions | reddit.com/r/selfhosted | 2022-07-28
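
    The REST API mentioned above is the usual integration point. A hypothetical sketch, assuming an ApiUser has been configured and the API listener is on the default port 5665 (the credentials are placeholders):

      # List monitored hosts and their current state.
      curl -k -sS -u icingaweb2:secret \
        -H 'Accept: application/json' \
        'https://localhost:5665/v1/objects/hosts' | jq '.results[].attrs.state'
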
  • kubicorn

    Simple, cloud native infrastructure for Kubernetes.

    Project mention: Best way to install and use kubernetes for learning | reddit.com/r/kubernetes | 2022-11-12

    Kubicorn (https://github.com/kubicorn/kubicorn)

  • raspberry-pi-dramble

    Raspberry Pi Kubernetes cluster that runs HA/HP Drupal 8

    Project mention: Why is it so hard to find a 5 port PoE switch where all 5 ports are PoE? | reddit.com/r/raspberry_pi | 2022-12-06

    https://www.pidramble.com - but you do need to power the router; the 5th port is an uplink.

  • ksync

    Sync files between your local system and a kubernetes cluster. (by ksync)

    Project mention: Connect to local AWX manager pod or how to access config/repos | reddit.com/r/awx | 2022-05-05

    Ksync
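
    For the AWX-pod use case above, the rough flow is: install the cluster-side agent once, declare a local-to-remote folder mapping, then leave the watcher running. A hedged sketch based on the general pattern in the ksync README; the selector and paths here are hypothetical:

      # One-time: install ksync's DaemonSet into the cluster.
      ksync init

      # Map a local folder to a path inside pods matching a label selector.
      ksync create --selector=app=awx-web ./awx-projects /var/lib/awx/projects

      # Keep this running; it watches both sides and syncs continuously.
      ksync watch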

  • kube-no-trouble

    Easily check your clusters for use of deprecated APIs

    Project mention: Kubernetes 1.21 - Going EOL on major cloud providers in early 2023 | reddit.com/r/kubernetes | 2022-12-05
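
    In the context of that 1.21 EOL post, the tool (its binary is called kubent) reads the current kubeconfig and reports objects built on APIs that are deprecated or removed in newer releases. A rough sketch; the flags are assumed from the project's README, so verify them before relying on them:

      # Scan the cluster the current kubeconfig points at.
      kubent

      # Check against the version you plan to upgrade to and fail CI on findings.
      kubent --target-version 1.25.0 --exit-error
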
  • Cluster

    Easy Map Annotation Clustering 📍

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2022-12-09.

Index

What are some of the best open-source Cluster projects? This list will help you:

Project Stars
1 minikube 25,274
2 TDengine 20,197
3 Gravitational Teleport 13,176
4 phpredis 9,529
5 VictoriaMetrics 7,481
6 guide 5,300
7 Akka.net 4,226
8 k3d 3,987
9 TensorFlowOnSpark 3,832
10 Crate 3,564
11 postgres-operator 2,958
12 polaris 2,747
13 puppeteer-cluster 2,636
14 godis 2,374
15 gardener 2,348
16 ActionHero 2,333
17 dcos 2,324
18 icinga2 1,782
19 kubicorn 1,680
20 raspberry-pi-dramble 1,639
21 ksync 1,347
22 kube-no-trouble 1,315
23 Cluster 1,217