k8s-device-plugin VS aws-virtual-gpu-device-plugin

Compare k8s-device-plugin vs aws-virtual-gpu-device-plugin and see what their differences are.

k8s-device-plugin

NVIDIA device plugin for Kubernetes (by NVIDIA)

aws-virtual-gpu-device-plugin

The AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads (by awslabs)
                      k8s-device-plugin    aws-virtual-gpu-device-plugin
Mentions              11                   3
Stars                 2,304                132
Stars growth (MoM)    4.6%                 -
Activity              9.5                  0.0
Latest commit         5 days ago           over 1 year ago
Language              Go                   Jupyter Notebook
License               Apache License 2.0   Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

k8s-device-plugin

Posts with mentions or reviews of k8s-device-plugin. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-09.

aws-virtual-gpu-device-plugin

Posts with mentions or reviews of aws-virtual-gpu-device-plugin. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-11-04.
  • Share a GPU between pods on AWS EKS
    10 projects | dev.to | 4 Nov 2021
    This project (available here) uses the k8s device plugin described in this AWS blog post to make GPU-based nodes publish how much GPU resource they have available. Instead of the amount of VRAM available or some abstract metric, this plugin advertises the number of pods/processes that can be connected to the GPU. This is handled by what NVIDIA calls the Multi-Process Service (MPS); a minimal sketch of this advertising approach appears after this list.
  • [D] Serverless solutions for GPU inference (if there's such a thing)
    2 projects | /r/MachineLearning | 22 Feb 2021
    AWS has apparently already started using this type of tech as of this year (see the link below). They mention virtual GPUs, but this particular solution probably won't help OP, unfortunately. https://aws.amazon.com/blogs/opensource/virtual-gpu-device-plugin-for-inference-workload-in-kubernetes/
  • AWS open source news and updates No.41
    13 projects | dev.to | 25 Oct 2020
    The post explores a GPU device plugin that addresses how to set a fractional amount of GPU resource for each pod by implementing the Kubernetes device plugin API together with NVIDIA MPS; a pod-side request example also follows after this list. This project has been open-sourced on GitHub.

What are some alternatives?

When comparing k8s-device-plugin and aws-virtual-gpu-device-plugin you can also consider the following projects:

kubevirt-gpu-device-plugin - NVIDIA k8s device plugin for Kubevirt

harvester - Open source hyperconverged infrastructure (HCI) software

kserve - Standardized Serverless ML Inference Platform on Kubernetes

aws-eks-share-gpu - How to share the same GPU between pods on AWS EKS

terraform-provider-kubernetes - Terraform Kubernetes provider

containers-roadmap - This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).

determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.

asdf-awscli

csi-driver-smb - This driver allows Kubernetes to access SMB Server on both Linux and Windows nodes.

booster - Software development framework specialized in building highly scalable microservices with CQRS and Event-Sourcing. It uses the semantics of the code to build a fully working GraphQL API that supports real-time subscriptions.