Kserve Alternatives
Similar projects and alternatives to kserve
- server: The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
- aws-virtual-gpu-device-plugin: Discontinued. An AWS virtual GPU device plugin that provides the capability to use smaller virtual GPUs for machine learning inference workloads.
- awesome-mlops: A curated list of awesome MLOps tools. (by kelvins)
- Python-Schema-Matching: A Python tool that uses XGBoost and sentence-transformers to perform schema matching on tables.
- client: Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. (by triton-inference-server)
- model_analyzer: Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of Triton Inference Server models.
- mpi-operator: Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.).
kserve reviews and mentions
Show HN: Software for Remote GPU-over-IP
Inference servers essentially turn a model running on CPU and/or GPU hardware into a microservice.
Many of them implement the kserve API standard[0], which covers everything from model loading/unloading to (of course) inference requests across models, versions, frameworks, etc.
So in the case of Triton[1] you can have any number of different TensorFlow/PyTorch/TensorRT/ONNX/etc. models, versions, and variants. You can have one or more Triton instances running on hardware with access to local GPUs (for this example). Then you can put standard REST and/or gRPC load balancers (or whatever you want) in front of them, hit them via another API, whatever.
Now all your applications need to do to perform inference is send an HTTP POST (or use a client[2]) with the model input; Triton runs it on a GPU (or CPU if you want), and you get back whatever the model output is (see the sketch after the links below).
Not a sales pitch for Triton, but it (like some others) can also do things like dynamic batching with QoS parameters, automated model profiling and performance optimization[3], really granular control over resources, response caching, Python middleware for application/business logic, accelerated media processing with NVIDIA DALI, all kinds of stuff.
[0] - https://github.com/kserve/kserve
[1] - https://github.com/triton-inference-server/server
[2] - https://github.com/triton-inference-server/client
[3] - https://github.com/triton-inference-server/model_analyzer
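To make that concrete, here is a minimal sketch of such a POST against the kserve v2 REST API that Triton exposes. The host, port, model name, and tensor names here are assumptions; substitute the values from your own deployment.

```python
# Minimal sketch: inference over the kserve v2 REST API as served by Triton.
# The URL, model name, input name, shape, and data are all hypothetical.
import requests

TRITON_URL = "http://localhost:8000"  # Triton's default HTTP port
MODEL = "my_model"                    # hypothetical model name

payload = {
    "inputs": [
        {
            "name": "input_0",        # must match the model's input tensor name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],  # flattened row-major values
        }
    ]
}

resp = requests.post(f"{TRITON_URL}/v2/models/{MODEL}/infer", json=payload)
resp.raise_for_status()
print(resp.json()["outputs"])  # output tensors come back as JSON
```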
Run your first Kubeflow pipeline
Kubeflow has multiple components:
- a central dashboard;
- Kubeflow Notebooks to manage Jupyter notebooks;
- Kubeflow Pipelines for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers (see the sketch below);
- KFServing for model serving (apparently superseded by KServe);
- Katib for hyperparameter tuning and model search;
- training operators such as TFJob for training TensorFlow models on Kubernetes.
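As a taste of the Pipelines component, a minimal pipeline built with the KFP v2 SDK looks roughly like this; the component and pipeline names are illustrative, and the compiled YAML is what you upload to Kubeflow Pipelines.

```python
# Rough sketch of a "hello world" Kubeflow pipeline with the KFP v2 SDK.
# Component and pipeline names are illustrative.
from kfp import compiler, dsl

@dsl.component
def say_hello(name: str) -> str:
    """A single-step component that runs in its own container."""
    message = f"Hello, {name}!"
    print(message)
    return message

@dsl.pipeline(name="hello-pipeline")
def hello_pipeline(recipient: str = "Kubeflow"):
    say_hello(name=recipient)

# Compile to IR YAML, which can be uploaded via the Pipelines UI or client.
compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")
```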
[D] Serverless solutions for GPU inference (if there's such a thing)
If you can run on Kubernetes, then KFServing is an open source solution that allows for GPU inference and is built upon Knative to allow scale-to-zero for GPU-based inference. From release 0.5 it also has multi-model serving as an alpha feature, allowing multiple models to share the same server (and, via NVIDIA Triton, the same GPU).
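For reference, a scale-to-zero GPU predictor is declared by setting minimum replicas to zero. A rough sketch with the current kserve Python SDK, assuming a serverless (Knative-backed) KServe install; the storage URI and resource names are hypothetical:

```python
# Rough sketch: a KServe InferenceService that scales to zero when idle.
# Assumes KServe in serverless (Knative) mode; URI and names are hypothetical.
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1TorchServeSpec,
    constants,
)

isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind=constants.KSERVE_KIND,
    metadata=client.V1ObjectMeta(name="torch-gpu", namespace="default"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            min_replicas=0,  # Knative scales the pod to zero when idle
            pytorch=V1beta1TorchServeSpec(
                storage_uri="gs://my-bucket/torchserve-model",  # hypothetical
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},  # request one GPU
                ),
            ),
        )
    ),
)

KServeClient().create(isvc)
```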
Stats
kserve/kserve is an open source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of kserve is Python.