mosec vs BentoML

| | mosec | BentoML |
|---|---|---|
| Mentions | 11 | 16 |
| Stars | 707 | 6,558 |
| Growth | 1.4% | 1.8% |
| Activity | 8.5 | 9.8 |
| Latest commit | 3 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts that mention mosec
- 20x Faster as the Beginning: Introducing pgvecto.rs extension written in Rust
  Mosec - A high-performance serving framework for ML models that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine. A simple and faster alternative to NVIDIA Triton. (A minimal usage sketch follows below.)
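As a concrete illustration of that description, here is a minimal sketch following mosec's documented Worker/Server pattern; the `Inference` class and the doubling logic are placeholders, and details may differ between versions:

```python
from typing import Any, List

from mosec import Server, Worker


class Inference(Worker):
    """Placeholder worker; a real one would load a model in __init__."""

    def forward(self, data: List[Any]) -> List[Any]:
        # With max_batch_size > 1, mosec collects waiting requests into a
        # batch and hands the whole list to forward; it must return one
        # result per request, in order.
        return [{"result": d.get("x", 0) * 2} for d in data]


if __name__ == "__main__":
    server = Server()
    # Two worker processes, dynamically batching up to 8 requests per call.
    server.append_worker(Inference, num=2, max_batch_size=8)
    server.run()
```

By default the service listens on port 8000 and accepts POST requests at /inference, though both are configurable.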
- [D] Handling Concurrent Request for ML Model API
  Yes, C++ would be better, but you can try mosec. It has a Python interface and helps you handle all the difficult parts of Python multiprocessing. The web service part is implemented in Rust, so it's fast enough for machine learning services.
- Launching ModelZ Beta!
  Contribute to open source projects: ModelZ is built on top of envd, mosec, modelz-llm, and many other open source projects. If you're interested in contributing to these projects, you can check out their GitHub repositories and start contributing.
- Deploying a model with an API in docker
  You could first create the image with the framework you like (e.g. bentoml, or https://github.com/mosecorg/mosec for something lightweight).
- PostgresML is 8-40x faster than Python HTTP microservices
- Python Machine Learning Service Can Run Way More Faster
- [D] Open Source ML Organisations to contribute to?
  If you're interested in machine learning model serving, you can check out mosec: https://github.com/mosecorg/mosec
- Why not multiprocessing
  During the development of Mosec, a machine learning serving project, I used a lot of multiprocessing to make it more efficient. I want to share some experiences and some research related to Python multiprocessing. (A generic sketch of the pattern follows below.)
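For context on what that post covers, here is a generic sketch of the queue-based worker-pool pattern such serving projects build on; it is illustrative only, not mosec's actual internals:

```python
import multiprocessing as mp


def worker(task_q, result_q):
    # Each worker runs in its own process, sidestepping the GIL; a real
    # server would load a model copy here and run inference per task.
    while True:
        item = task_q.get()
        if item is None:  # poison pill: time to shut down
            break
        idx, payload = item
        result_q.put((idx, payload * 2))  # stand-in for model inference


if __name__ == "__main__":
    task_q, result_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(task_q, result_q)) for _ in range(4)]
    for p in procs:
        p.start()
    for i in range(8):
        task_q.put((i, i))
    print(sorted(result_q.get() for _ in range(8)))
    for _ in procs:
        task_q.put(None)  # one pill per worker
    for p in procs:
        p.join()
```

The tricky parts the post alludes to (clean shutdown, result ordering, back-pressure) are exactly what a framework like mosec abstracts away.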
- [P] Mosec: deploy your machine learning model in an easy and efficient way
  That's a good example. I have met the same situation before. I have created a discussion on GitHub to track the DAG progress.
- Mosec: deploy your machine learning model in an easy and efficient way
Posts that mention BentoML
- Who's hiring developer advocates? (December 2023)
  Link to GitHub -->
- project ideas/advice for entry-level grad jobs?
  There are a few tools you can use as "cheat mode" shortcuts to give you a leg up as you're getting started. Here's one: https://github.com/bentoml/BentoML
- Two high schoolers trying to use Azure/GCP/AWS - need help!
  Then you can look into BentoML (https://github.com/bentoml/BentoML), which is used to deploy ML stuff with many more benefits.
- Ask HN: Who is hiring? (November 2022)
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
  For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like Bento, Ray Serve, and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least optionally) and has some conveniences: ease of use, easy deployment, hosting multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experiences here, as I know there are now quite a few options! Any help is appreciated! Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here; these are just the ones I know of. Thanks in advance.
- PostgresML is 8-40x faster than Python HTTP microservices
- Congratulations on v1.0, BentoML 🍱 ! You are r/mlops OSS of the month!
- Show HN: Truss – serve any ML model, anywhere, without boilerplate code
  In this category I’m a big fan of https://github.com/bentoml/BentoML. What I like about it is their idiomatic developer experience. It reminds me of other Pythonic frameworks like Flask and Django in a good way. I have no affiliation with them whatsoever, just an admirer.
- [P] Introducing BentoML 1.0 - A faster way to ship your models to production
  GitHub page: https://github.com/bentoml/BentoML (a minimal service sketch follows below)
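For a sense of what "a faster way to ship your models" looks like in code, here is a minimal sketch using the 1.0-era bentoml.Service API; the service name and echo logic are placeholders, and later releases may expose a different interface:

```python
import bentoml
from bentoml.io import JSON

svc = bentoml.Service("echo_service")


@svc.api(input=JSON(), output=JSON())
def predict(payload: dict) -> dict:
    # A real service would invoke a model runner here; this just echoes.
    return {"received": payload}
```

Saved as service.py, this can be served locally with `bentoml serve service:svc`, which exposes predict as an HTTP endpoint.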
- Show HN: BentoML goes 1.0 – A faster way to ship your models to production
What are some alternatives?
GPflow - Gaussian processes in TensorFlow
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
mlrun - MLRun is an open source MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications.
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
text-generation-inference - Large Language Model Text Generation Inference
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
metaflow - Build and manage real-life ML, AI, and data science projects with ease!
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
postgresml - The GPU-powered AI application database. Get your app to market faster using the simplicity of SQL and the latest NLP, ML + LLM models.
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
inference-benchmark - Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper)
kubeflow - Machine Learning Toolkit for Kubernetes