kfserving
Standardized Serverless ML Inference Platform on Kubernetes [Moved to: https://github.com/kserve/kserve] (by kubeflow)
inferencedb
🚀 Stream inferences of real-time ML models in production to any data lake (Experimental) (by aporia-ai)
| | kfserving | inferencedb |
|---|---|---|
| Mentions | 1 | 9 |
| Stars | 2,113 | 77 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Latest commit | about 1 year ago | almost 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kfserving
Posts with mentions or reviews of kfserving. We have used some of these posts to build our list of alternatives and similar projects.
- How do we assign pods properly so that KFServing can scale down GPU Instances to zero?
We are using KFServing as well. KFServing allows us to auto-scale our GPU instances up and down, specifically scaling to zero when they're not in use. The components in KFServing also get assigned to GPU nodes when we apply them to our cluster.
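The post above doesn't include configuration, but a minimal sketch of the setup it describes might look like the following: an InferenceService whose predictor can scale to zero and is steered onto GPU nodes. The model URI, namespace, and node selector label are placeholders, the use of the kserve API group is an assumption (the kubeflow-era group was serving.kubeflow.org), and the manifest is applied with the official kubernetes Python client.

```python
# A minimal sketch (not from the original post) of scale-to-zero for a
# KFServing/KServe InferenceService, applied via the `kubernetes` client.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",  # assumption: kserve-era API group
    "kind": "InferenceService",
    "metadata": {"name": "sklearn-demo", "namespace": "models"},
    "spec": {
        "predictor": {
            # minReplicas: 0 lets the predictor scale to zero when idle,
            # freeing the GPU node.
            "minReplicas": 0,
            # Hypothetical label used to steer pods onto GPU nodes.
            "nodeSelector": {"accelerator": "nvidia-gpu"},
            "sklearn": {
                "storageUri": "gs://example-bucket/model",  # placeholder URI
                "resources": {"limits": {"nvidia.com/gpu": "1"}},
            },
        }
    },
}

api.create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="models",
    plural="inferenceservices",
    body=inference_service,
)
```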
inferencedb
Posts with mentions or reviews of inferencedb. We have used some of these posts to build our list of alternatives and similar projects.
- [P] InferenceDB - Makes it easy to store predictions of real-time ML models in S3
- InferenceDB: Stream inferences of real-time ML models to S3 using Kafka
- InferenceDB: Stream inferences of real-time ML models in production to any data lake 🚀
- InferenceDB – Stream ML inferences to S3 or any data lake with CRDs
- InferenceDB – Stream predictions from KServe to S3 or any data lakes
- InferenceDB – Stream predictions of real-time ML models to data lakes
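Taken together, these posts describe a pipeline in which real-time models publish predictions to Kafka and InferenceDB persists them to a data lake such as S3. A conceptual sketch of the producer side is shown below, using the kafka-python library; the topic name and record schema are hypothetical, not InferenceDB's actual API.

```python
# A conceptual sketch of the producer side of such a pipeline: a service that
# publishes each prediction as a JSON record to a Kafka topic, which a
# consumer like InferenceDB could then persist to S3 or another data lake.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

def log_prediction(features, prediction):
    """Publish one inference record to the stream (hypothetical schema)."""
    producer.send(
        "model-predictions",  # hypothetical topic name
        value={
            "timestamp": time.time(),
            "features": features,
            "prediction": prediction,
        },
    )

log_prediction({"age": 42, "income": 55000}, {"churn_probability": 0.17})
producer.flush()  # ensure the record is actually sent before exiting
```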
What are some alternatives?
When comparing kfserving and inferencedb you can also consider the following projects:
soopervisor - ☁️ Export Ploomber pipelines to Kubernetes (Argo), Airflow, AWS Batch, SLURM, and Kubeflow.
kserve - Standardized Serverless ML Inference Platform on Kubernetes
mosec - A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to fully exploit your machine's compute resources
examples - 📝 Examples of how to use Neptune for different use cases and with various MLOps tools
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
community - Information about the Kubeflow community including proposals and governance information.