inferencedb vs mlrun
| | inferencedb | mlrun |
|---|---|---|
| Mentions | 9 | 3 |
| Stars | 77 | 1,294 |
| Growth | - | 6.0% |
| Activity | 0.0 | 9.9 |
| Latest commit | almost 2 years ago | 2 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
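To make the percentile claim concrete, here is a minimal sketch of how such a ranking could be computed. The site's exact scoring formula is not published, so the function name and the sample scores below are assumptions purely for illustration:

```python
def activity_percentile(score, all_scores):
    """Return the fraction of tracked projects whose activity score
    is strictly below `score` (i.e. its percentile rank)."""
    below = sum(1 for s in all_scores if s < score)
    return below / len(all_scores)

# Hypothetical activity scores for a set of tracked projects.
scores = [0.0, 1.2, 3.4, 5.0, 6.1, 7.8, 8.5, 9.0, 9.5, 9.9]

# A score of 9.0 ranks above 7 of these 10 projects (percentile 0.7);
# on the real site, 9.0 corresponds to the top 10% of all tracked projects.
print(activity_percentile(9.0, scores))  # 0.7
```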
inferencedb
- [P] InferenceDB - Makes it easy to store predictions of real-time ML models in S3
- InferenceDB: Stream inferences of real-time ML models to S3 using Kafka
- InferenceDB: Stream inferences of real-time ML models in production to any data lake
- InferenceDB – Stream ML inferences to S3 or any data lake with CRDs
- InferenceDB – Stream predictions from KServe to S3 or any data lakes
- InferenceDB – Stream predictions of real-time ML models to data lakes
mlrun
- Discussion on Need of Feature Stores
- I reviewed 50+ open-source MLOps tools. Here's the result

  "You should also add MLRun: https://github.com/mlrun/mlrun"
- Has anyone here been able to deploy Mlrun successfully on Kubernetes cluster?
What are some alternatives?
kfserving - Standardized Serverless ML Inference Platform on Kubernetes [Moved to: https://github.com/kserve/kserve]
feast - Feature Store for Machine Learning
dagster-example-pipeline - Template Dagster repo using poetry and a single Docker container; works well with CICD
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
SmartSim - SmartSim Infrastructure Library.
phidata - Build AI Assistants with memory, knowledge and tools.
mosec - A high-performance ML model serving framework, offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
Media-Recommendation-Engine - A Recommendation Engine API that can be used to recommend movies, music, games, manga, anime, comics, tv shows and books. Deployed using an AWS EC2 instance.
loopquest - A Production Tool for Embodied AI
AquilaHub - Load and serve Neural Encoder Models