If you can run on Kubernetes, KFServing is an open source solution that supports GPU inference and is built on Knative, which allows scale-to-zero for GPU-based inference. As of release 0.5 it also offers multi-model serving as an alpha feature, letting multiple models share the same server (and, via NVIDIA Triton, the same GPU).
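A minimal sketch of what a scale-to-zero GPU `InferenceService` might look like (the name, model URI, and GPU count below are placeholders, and the exact schema can differ between KFServing releases):

```yaml
# Hypothetical KFServing v1beta1 manifest -- name and storageUri are placeholders
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: my-gpu-model              # placeholder name
spec:
  predictor:
    minReplicas: 0                # Knative scales the predictor to zero when idle
    triton:                       # Triton predictor, which enables GPU sharing across models
      storageUri: gs://my-bucket/models   # placeholder model location
      resources:
        limits:
          nvidia.com/gpu: 1       # request one GPU for the predictor pod
```

With `minReplicas: 0`, the GPU node can be released while no requests are in flight, which is the main cost win for bursty inference workloads.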
AWS has apparently already started using this type of tech as of this year (see link below). They mention virtual GPUs, but that particular solution probably won't help OP, unfortunately. https://aws.amazon.com/blogs/opensource/virtual-gpu-device-plugin-for-inference-workload-in-kubernetes/