mosec vs inference-benchmark

| | mosec | inference-benchmark |
|---|---|---|
| Mentions | 11 | 1 |
| Stars | 707 | 26 |
| Growth | 1.4% | - |
| Activity | 8.5 | 6.4 |
| Last commit | 3 days ago | 10 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.
mosec
- 20x Faster as the Beginning: Introducing pgvecto.rs extension written in Rust
  "Mosec - a high-performance serving framework for ML models that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine. A simple and faster alternative to NVIDIA Triton." (a minimal serving sketch follows this list)
- [D] Handling Concurrent Request for ML Model API
  "Yes, C++ would be better, but you can try mosec. It has a Python interface and handles all the difficult parts of Python multiprocessing for you. The web service layer is implemented in Rust, so it is fast enough for machine learning services."
- Launching ModelZ Beta!
  "Contribute to open source projects: ModelZ is built on top of envd, mosec, modelz-llm, and many other open source projects. If you're interested in contributing to these projects, you can check out their GitHub repositories and start contributing."
- Deploying a model with an API in docker
  "You could first create the image with the framework you like (e.g. BentoML, or https://github.com/mosecorg/mosec for something lightweight)."
- PostgresML is 8-40x faster than Python HTTP microservices
- Python Machine Learning Service Can Run Way More Faster
- [D] Open Source ML Organisations to contribute to?
  "If you're interested in machine learning model serving, you can check out mosec: https://github.com/mosecorg/mosec"
- Why not multiprocessing
  "During the development of the machine learning serving project Mosec, I used a lot of multiprocessing to make it more efficient. I want to share some experience and research related to Python multiprocessing." (a generic multiprocessing illustration follows this list)
- [P] Mosec: deploy your machine learning model in an easy and efficient way
  "That's a good example. I have run into the same situation before. I have created a discussion on GitHub to track the DAG progress."
- Mosec: deploy your machine learning model in an easy and efficient way
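Several of the posts above describe mosec's core design: Python worker processes behind a Rust web layer, with dynamic batching enabled per worker stage. The sketch below is a minimal example based on the `Server`/`Worker` interface shown in mosec's README; parameter names and defaults may differ across versions, so treat it as illustrative rather than authoritative.

```python
# Minimal mosec service sketch (based on the README interface; verify
# against https://github.com/mosecorg/mosec for your installed version).
from mosec import Server, Worker


class Inference(Worker):
    """Runs in its own Python process; the Rust layer handles HTTP."""

    def forward(self, data: list) -> list:
        # With max_batch_size > 1, mosec dynamically batches concurrent
        # requests and passes them in as a list; return one result per input.
        return [{"echo": item} for item in data]


if __name__ == "__main__":
    server = Server()
    # num sets the number of worker processes for this stage;
    # max_batch_size enables dynamic batching.
    server.append_worker(Inference, num=2, max_batch_size=8)
    server.run()
```

Clients then POST JSON to the running server (mosec's examples use the `/inference` route on port 8000 by default). Chaining multiple `append_worker` calls is how a CPU/GPU pipeline is expressed: each stage gets its own pool of processes.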
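The "Why not multiprocessing" post discusses the worker-process pattern that serving frameworks like mosec rely on to sidestep the GIL. The following is a generic standard-library illustration of that pattern, not mosec's actual internals; the squaring stands in for model inference.

```python
# Generic worker-process pattern with the standard library (illustrative
# only; this is not mosec's actual implementation).
import multiprocessing as mp


def worker(tasks, results):
    """Pull items off the task queue until a None sentinel arrives."""
    while True:
        item = tasks.get()
        if item is None:
            break
        results.put(item * item)  # stand-in for model inference


if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for p in procs:
        p.start()
    for i in range(8):
        tasks.put(i)
    print(sorted(results.get() for _ in range(8)))
    for _ in procs:
        tasks.put(None)  # one sentinel per worker process
    for p in procs:
        p.join()
```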
inference-benchmark
- [D] Handling Concurrent Request for ML Model API
  "I have done some benchmarks before: https://github.com/tensorchord/inference-benchmark" (an illustrative benchmarking sketch follows)
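The linked repository contains the author's actual benchmark setup. As an illustration only (the endpoint URL, payload, and concurrency level below are placeholders, not the repo's configuration), measuring concurrent latency against an HTTP inference service can be as simple as:

```python
# Illustrative latency benchmark for an HTTP inference endpoint;
# URL, payload, and concurrency are hypothetical placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/inference"  # hypothetical endpoint


def timed_request(_):
    start = time.perf_counter()
    requests.post(URL, json={"input": "hello"}, timeout=30)
    return time.perf_counter() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=16) as pool:
        latencies = sorted(pool.map(timed_request, range(64)))
    p50 = latencies[len(latencies) // 2]
    p99 = latencies[int(len(latencies) * 0.99)]
    print(f"p50={p50 * 1000:.1f} ms  p99={p99 * 1000:.1f} ms")
```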
What are some alternatives?
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
agentchain - Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks
GPflow - Gaussian processes in TensorFlow
ChatFred - Alfred workflow using ChatGPT, DALL·E 2 and other models for chatting, image generation and more.
mlrun - MLRun is an open source MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications.
text-generation-inference - Large Language Model Text Generation Inference
inference - Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
metaflow - Build and manage real-life ML, AI, and data science projects with ease!
truss - The simplest way to serve AI/ML models in production
postgresml - The GPU-powered AI application database. Get your app to market faster using the simplicity of SQL and the latest NLP, ML + LLM models.
gunicorn - gunicorn 'Green Unicorn' is a WSGI HTTP Server for UNIX, fast clients and sleepy applications.