mlrun
mosec
| | mlrun | mosec |
|---|---|---|
| Mentions | 3 | 11 |
| Stars | 1,294 | 701 |
| Stars growth | 6.0% | 3.4% |
| Activity | 9.9 | 8.6 |
| Latest commit | 1 day ago | 27 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mlrun
- Discussion on Need of Feature Stores
- I reviewed 50+ open-source MLOps tools. Here's the result
  You should also add MLRun: https://github.com/mlrun/mlrun
- Has anyone here been able to deploy MLRun successfully on a Kubernetes cluster?
mosec
- 20x Faster as the Beginning: Introducing pgvecto.rs extension written in Rust
  Mosec - a high-performance serving framework for ML models that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine. A simple and faster alternative to NVIDIA Triton.
- [D] Handling Concurrent Request for ML Model API
  Yes, C++ would be better, but you can try mosec. It has a Python interface and handles all the difficult parts of Python multiprocessing for you. The web-service layer is implemented in Rust, so it is fast enough for machine learning services.
- Launching ModelZ Beta!
  Contribute to open-source projects: ModelZ is built on top of envd, mosec, modelz-llm, and many other open-source projects. If you're interested in contributing to these projects, check out their GitHub repositories and start contributing.
- Deploying a model with an API in Docker
  You could first create the image with the framework you like (e.g. BentoML, or https://github.com/mosecorg/mosec for a lightweight option).
- PostgresML is 8-40x faster than Python HTTP microservices
- Python Machine Learning Service Can Run Way More Faster
- [D] Open Source ML Organisations to contribute to?
  If you're interested in machine learning model serving, you can check out mosec: https://github.com/mosecorg/mosec
- Why not multiprocessing
  During the development of Mosec, a machine learning serving project, I used multiprocessing heavily to make it more efficient. I want to share some experiences and research related to Python multiprocessing.
- [P] Mosec: deploy your machine learning model in an easy and efficient way
  That's a good example. I have run into the same situation before and have created a discussion on GitHub to track the DAG progress.
- Mosec: deploy your machine learning model in an easy and efficient way
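Several of the mentions above center on mosec's dynamic batching: requests arriving within a short window are grouped and run through the model together, amortizing per-inference overhead. Below is a minimal standard-library sketch of that idea. It uses a thread and a queue for brevity, whereas mosec itself coordinates multiple worker processes behind a Rust web layer; none of the names here are part of mosec's actual API.

```python
import queue
import threading
import time

def dynamic_batcher(requests_q, handle_batch, max_batch=8, wait_s=0.05):
    """Group requests that arrive within a short window and process
    them together, up to max_batch at a time (conceptual sketch)."""
    while True:
        item = requests_q.get()
        if item is None:                    # sentinel: shut down
            return
        batch = [item]
        deadline = time.monotonic() + wait_s
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break                       # wait window expired
            try:
                nxt = requests_q.get(timeout=remaining)
            except queue.Empty:
                break                       # no more pending requests
            if nxt is None:                 # sentinel arrived mid-batch
                handle_batch(batch)
                return
            batch.append(nxt)
        handle_batch(batch)                 # run "inference" on the batch

# Usage: batch five requests through a toy model (squaring numbers).
results = []
q = queue.Queue()
for i in range(5):
    q.put(i)
q.put(None)                                 # stop after draining
t = threading.Thread(
    target=dynamic_batcher,
    args=(q, lambda batch: results.extend(x * x for x in batch)),
)
t.start()
t.join()
print(sorted(results))  # -> [0, 1, 4, 9, 16]
```

The trade-off being tuned here is the same one a serving framework exposes: a larger `max_batch` or longer `wait_s` improves throughput on batch-friendly hardware (GPUs especially) at the cost of added latency for the first request in each batch.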
What are some alternatives?
feast - Feature Store for Machine Learning
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
dagster-example-pipeline - Template Dagster repo using poetry and a single Docker container; works well with CI/CD
GPflow - Gaussian processes in TensorFlow
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
text-generation-inference - Large Language Model Text Generation Inference
SmartSim - SmartSim Infrastructure Library.
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
phidata - Build AI Assistants with memory, knowledge and tools.
postgresml - The GPU-powered AI application database. Get your app to market faster using the simplicity of SQL and the latest NLP, ML + LLM models.
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
inference-benchmark - Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper)