volcano
kube-batch
| | volcano | kube-batch |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 3,744 | 1,057 |
| Stars growth (month over month) | 3.3% | - |
| Activity | 9.1 | 4.0 |
| Latest commit | 3 days ago | 11 months ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mentions of volcano
Can we specify nodeSelector inline for a kubectl command
Also, if you are creating bare pods, this sounds like batch scheduling and you should consider using Jobs instead, to have a pod controller. And then you could also consider the https://volcano.sh/ scheduler if it has a fitting scheduling plugin for your use case.
My Journey With Spark On Kubernetes... In Python (1/3)
For our experiments, we will use Volcano, a batch scheduler for Kubernetes that is well suited to scheduling the pods of Spark applications more efficiently than the default kube-scheduler. The main reason is that Volcano supports "group scheduling" or "gang scheduling": while the default Kubernetes scheduler places containers one by one, Volcano ensures that a gang of related containers (here, the Spark driver and its executors) can be scheduled at the same time. If for any reason it is not possible to deploy all the containers in a gang, Volcano will not schedule that gang. This article explains the reasons for using Volcano in more detail.
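To make the gang-scheduling idea above concrete, here is a minimal sketch (not from the article) that creates a Volcano PodGroup and a driver pod opting into the `volcano` scheduler via the official Kubernetes Python client. The namespace, names, image, `minMember` value, and the group annotation key are illustrative assumptions; in practice Volcano's own Job CRD or the Spark operator would set these up for you.

```python
# Minimal sketch, assuming the Volcano CRDs are installed and `pip install kubernetes`.
# All names, the namespace, the image, and the annotation key are illustrative.
from kubernetes import client, config

config.load_kube_config()

# A PodGroup tells Volcano how many pods must be placeable together:
# with minMember=3 (1 driver + 2 executors), none of them is scheduled
# until all three fit on the cluster.
pod_group = {
    "apiVersion": "scheduling.volcano.sh/v1beta1",
    "kind": "PodGroup",
    "metadata": {"name": "spark-pi-group", "namespace": "spark"},
    "spec": {"minMember": 3},
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="scheduling.volcano.sh",
    version="v1beta1",
    namespace="spark",
    plural="podgroups",
    body=pod_group,
)

# Each pod in the gang opts into the Volcano scheduler and references the group.
# The annotation key below follows the kube-batch convention; check the Volcano
# docs for the exact key supported by your version.
driver = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="spark-pi-driver",
        namespace="spark",
        annotations={"scheduling.k8s.io/group-name": "spark-pi-group"},
    ),
    spec=client.V1PodSpec(
        scheduler_name="volcano",
        restart_policy="Never",
        containers=[client.V1Container(name="driver", image="spark:3.5.0")],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="spark", body=driver)
```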
Mentions of kube-batch
Volcano vs Yunikorn vs Knative
tldr; a Knative batch-job provider should support coscheduling and kube-batch. We had developed an in-house one for KubeFlow, from scratch. We had also added Apache Arrow support to knative-serving with a corresponding CloudEvents interop layer, natively (i.e. secure shared memory via the IPC namespace instead of message passing on the same host). We use it as a direct replacement for Apache Arrow Ballista and had planned to research a further DataFusion compatibility layer. Almost any modern ETL is pretty dubious without Apache Arrow.
Kubernetes Was Never Designed for Batch Jobs
Another aspect of batch jobs is that we’ll often want to run distributed computations where we split our data into chunks and run a function on each chunk. One popular option is to run Spark, which is built for exactly this use case, on top of Kubernetes. And there are other options for additional software to make running distributed computations on Kubernetes easier.
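To make the "split the data into chunks, run a function on each chunk" pattern concrete, here is a hedged PySpark sketch (not from the article); the Kubernetes master URL, container image, and executor count are placeholders and assume a working Spark-on-Kubernetes setup.

```python
# Sketch of chunked, distributed computation with PySpark on Kubernetes.
# Assumes `pip install pyspark`; the master URL, image, and executor count
# are placeholders for a real cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("k8s://https://kubernetes.example:6443")   # placeholder API server
    .appName("chunked-map")
    .config("spark.executor.instances", "4")
    .config("spark.kubernetes.container.image", "spark:3.5.0")  # placeholder
    .getOrCreate()
)

# Split the data into 64 chunks (partitions) and run a function on each element;
# Spark distributes the partitions across the executor pods.
rdd = spark.sparkContext.parallelize(range(1_000_000), numSlices=64)
print(rdd.map(lambda x: x * x).sum())
spark.stop()
```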
Scaling Kubernetes to 7,500 Nodes
> That said, strain on the kube-scheduler is spiky. A new job may consist of many hundreds of pods all being created at once, then return to a relatively low rate of churn.
Last I checked, the default scheduler places Pods one at a time. It might be advantageous to use a gang/batch scheduler like kube-batch[0], Poseidon[1] or DCM[2].
[0] https://github.com/kubernetes-sigs/kube-batch
[1] https://github.com/kubernetes-sigs/poseidon
[2] https://github.com/vmware/declarative-cluster-management
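For readers unfamiliar with how an alternative scheduler like those suggested in the comment above is selected at all: each pod names its scheduler via `spec.schedulerName`, so a bursty batch job can be handed to a gang scheduler instead of the default kube-scheduler. A minimal sketch, assuming kube-batch is deployed under its default scheduler name; the namespace, pod names, image, and annotation key are placeholders.

```python
# Sketch: directing a burst of pods at a gang scheduler instead of the default
# kube-scheduler. Assumes kube-batch runs under the scheduler name "kube-batch";
# namespace, pod names, image, and the annotation key are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for i in range(100):  # e.g. "many hundreds of pods all being created at once"
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=f"worker-{i}",
            namespace="batch",
            # kube-batch groups pods for gang scheduling via this annotation
            annotations={"scheduling.k8s.io/group-name": "worker-gang"},
        ),
        spec=client.V1PodSpec(
            scheduler_name="kube-batch",  # bypass the default scheduler
            restart_policy="Never",
            containers=[client.V1Container(name="worker", image="busybox")],
        ),
    )
    core.create_namespaced_pod(namespace="batch", body=pod)
```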
What are some alternatives?
spark-operator - Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
argo - Workflow Engine for Kubernetes
mpi-operator - Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.)
singularity-cri - The Singularity implementation of the Kubernetes Container Runtime Interface
kube-scheduler-simulator - The simulator for the Kubernetes scheduler
warewulf - Warewulf is a stateless and diskless container operating system provisioning system for large clusters of bare metal and/or virtual systems.
sidekick - High Performance HTTP Sidecar Load Balancer
charts - ⚠️(OBSOLETE) Curated applications for Kubernetes
sarus - OCI-compatible engine to deploy Linux containers on HPC environments.
kubernetes-operator-roiergasias - The 'Roiergasias' Kubernetes operator addresses a fundamental requirement of data science / machine learning projects that run their pipelines on Kubernetes: quickly provisioning a declarative data pipeline on demand with simple kubectl commands, essentially a No Ops approach. The underlying principle is to combine the best of Docker, Kubernetes, and programming-language features to run a workflow with minimal workflow-definition syntax. It is a Go-based workflow engine that runs on the command line or on Kubernetes via a custom operator, giving machine learning projects a quick, automated data pipeline (a flavor of MLOps).