kube-batch vs armada

| | kube-batch | armada |
|---|---|---|
| Mentions | 3 | 8 |
| Stars | 1,057 | 416 |
| Growth | - | 3.4% |
| Activity | 4.0 | 9.7 |
| Latest commit | 12 months ago | 1 day ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 indicates a project is among the top 10% of the most actively developed projects we track.
kube-batch
- Volcano vs Yunikorn vs Knative
tl;dr: a Knative batch-job provider should support coscheduling and kube-batch. We had developed an in-house one for KubeFlow, from scratch. We had added Apache Arrow support to knative-serving with a CloudEvents interop layer, natively (i.e. secure shared memory via an IPC namespace, instead of message passing on the same host). We use it as a direct replacement for Apache Arrow Ballista, and had planned to research a further DataFusion compatibility layer. Almost any modern ETL stack is pretty dubious without Apache Arrow.
- Kubernetes Was Never Designed for Batch Jobs
Another aspect of batch jobs is that we’ll often want to run distributed computations where we split our data into chunks and run a function on each chunk. One popular option is to run Spark, which is built for exactly this use case, on top of Kubernetes. And there are other options for additional software to make running distributed computations on Kubernetes easier.
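The chunk-and-map pattern described above can be sketched in a few lines of Go (the language both projects are written in). This is a minimal single-process illustration of the idea, not code from either project; `chunkedSum` is a hypothetical name:

```go
package main

import (
	"fmt"
	"sync"
)

// chunkedSum splits data into chunks of size chunkSize and sums each
// chunk in its own goroutine, then combines the partial results —
// the same split/map/reduce shape a distributed batch job has,
// minus the cluster.
func chunkedSum(data []int, chunkSize int) int {
	var wg sync.WaitGroup
	partials := make(chan int)

	for start := 0; start < len(data); start += chunkSize {
		end := start + chunkSize
		if end > len(data) {
			end = len(data)
		}
		wg.Add(1)
		go func(chunk []int) {
			defer wg.Done()
			s := 0
			for _, v := range chunk {
				s += v
			}
			partials <- s
		}(data[start:end])
	}

	// Close the channel once every chunk worker has reported in.
	go func() {
		wg.Wait()
		close(partials)
	}()

	total := 0
	for p := range partials {
		total += p
	}
	return total
}

func main() {
	data := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	fmt.Println(chunkedSum(data, 3)) // prints 55
}
```

In a real Spark-on-Kubernetes or Armada setup, each chunk would instead become a task on a separate pod, and the reduce step would gather results across the cluster.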
- Scaling Kubernetes to 7,500 Nodes
> That said, strain on the kube-scheduler is spiky. A new job may consist of many hundreds of pods all being created at once, then return to a relatively low rate of churn.
Last I checked, the default scheduler places Pods one at a time. It might be advantageous to use a gang/batch scheduler like kube-batch[0], Poseidon[1] or DCM[2].
[0] https://github.com/kubernetes-sigs/kube-batch
[1] https://github.com/kubernetes-sigs/poseidon
[2] https://github.com/vmware/declarative-cluster-management
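As a sketch of what gang scheduling looks like in practice, kube-batch's tutorial pairs a PodGroup (with a `minMember` threshold, so no member pod starts until all of them can be placed) with pods that opt in via `schedulerName`. The names below are hypothetical, and the API group/version may differ between releases:

```yaml
# A PodGroup asks kube-batch to place its member pods all-or-nothing.
apiVersion: scheduling.incubator.k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: example-group          # hypothetical name
spec:
  minMember: 3                 # hold every pod until 3 can be scheduled together
---
apiVersion: v1
kind: Pod
metadata:
  name: worker-0               # hypothetical name
  annotations:
    scheduling.k8s.io/group-name: example-group  # membership in the group above
spec:
  schedulerName: kube-batch    # route this pod to kube-batch, not the default scheduler
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
```

This avoids the partial-placement deadlocks the default one-pod-at-a-time scheduler can hit when several large jobs compete for the same capacity.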
armada
- job scheduling for scientific computing on k8s?
Armada could be an alternative: https://armadaproject.io/
- OpenAI, Scaling Kubernetes to 7,500 nodes
To overcome the limitations on cluster size in Kubernetes, folks may want to look at the Armada Project (https://armadaproject.io/). Armada is a…
- Kubernetes Was Never Designed for Batch Jobs
- Armada
- Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
The naming sounds very similar to this project: https://github.com/G-Research/armada
- Queue batch job that would exceed namespace quota.
What are some alternatives?
volcano - A Cloud Native Batch System (Project under CNCF)
karmada - Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
argo - Workflow Engine for Kubernetes
mpi-operator - Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.)
kube-scheduler-simulator - The simulator for the Kubernetes scheduler
kueue - Kubernetes-native Job Queueing
sidekick - High Performance HTTP Sidecar Load Balancer
magic-wormhole - get things from one computer to another, safely [Moved to: https://github.com/magic-wormhole/magic-wormhole]
sarus - OCI-compatible engine to deploy Linux containers on HPC environments.
slurm - Slurm: A Highly Scalable Workload Manager