model-serving

Top 23 model-serving Open-Source Projects

  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

  • Project mention: AI leaderboards are no longer useful. It's time to switch to Pareto curves | news.ycombinator.com | 2024-04-30

    I guess the root cause of my claim is that OpenAI won't tell us whether or not GPT-3.5 is an MoE model, and I assumed it wasn't. Since GPT-3.5 is clearly nondeterministic at temp=0, I believed the nondeterminism was due to FPU stuff, and this effect was amplified with GPT-4's MoE. But if GPT-3.5 is also MoE then that's just wrong.

    What makes this especially tricky is that small models are truly 100% deterministic at temp=0 because the relative likelihoods are too coarse for FPU issues to be a factor. I had thought 3.5 was big enough that some of its token probabilities were too fine-grained for the FPU. But that's probably wrong.

    On the other hand, it's not just GPT, there are currently floating-point difficulties in vllm which significantly affect the determinism of any model run on it: https://github.com/vllm-project/vllm/issues/966 Note that a suggested fix is upcasting to float32. So it's possible that GPT-3.5 is using an especially low-precision float and introducing nondeterminism by saving money on compute costs.

    Sadly I do not have the money[1] to actually run a test to falsify any of this. It seems like this would be a good little research project.

    [1] Or the time, or the motivation :) But this stuff is expensive.
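The floating-point argument in the comment above is easy to demonstrate even on a CPU: floating-point addition is not associative, so reducing the same values in a different order (as happens when parallel kernels or varying batch sizes change the reduction order) can produce slightly different logits, which can then flip a greedy token choice at temperature 0. A minimal, self-contained illustration:

```python
# Floating-point addition is not associative: the same three values summed
# in two different orders give different results. At model scale, the same
# effect perturbs logits, which can flip argmax decisions at temperature 0.
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)

print(left_to_right)                    # 0.6000000000000001
print(right_to_left)                    # 0.6
print(left_to_right == right_to_left)   # False
```

This is why upcasting to float32 (as suggested in the linked vllm issue) helps: higher precision shrinks the rounding error per operation, so reordering is less likely to change which token has the highest probability.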

  • BentoML

    The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!

  • Project mention: Who's hiring developer advocates? (December 2023) | dev.to | 2023-12-04


  • kserve

    Standardized Serverless ML Inference Platform on Kubernetes

  • lightllm

    LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.

  • Project mention: FLaNK Weekly 31 December 2023 | dev.to | 2023-12-31
  • aici

    AICI: Prompts as (Wasm) Programs

  • Project mention: Google Gemini: Context Caching | news.ycombinator.com | 2024-05-16

    To me, context caching is only a subset of what is possible with full control over the model. I consider this a more complete list: https://github.com/microsoft/aici?tab=readme-ov-file#flexibi...

    Context caching only gets you “forking generation into multiple branches” (i.e. sharing work between multiple generations)

  • mlrun

    MLRun is an open source MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications.

  • hopsworks

    Hopsworks - Data-Intensive AI platform with a Feature Store

  • functime

    Time-series machine learning at scale. Built with Polars for embarrassingly parallel feature extraction and forecasts on panel data.

  • Project mention: functime: NEW Data - star count:616.0 | /r/algoprojects | 2023-11-08
  • truss

    The simplest way to serve AI/ML models in production (by basetenlabs)

  • Yatai

    Model Deployment at Scale on Kubernetes 🦄️

  • mosec

A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine

  • Project mention: 20x Faster as the Beginning: Introducing pgvecto.rs extension written in Rust | dev.to | 2023-08-06

Mosec - a high-performance serving framework for ML models that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine. A simple and faster alternative to NVIDIA Triton.
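Dynamic batching, which both mosec and Triton advertise, can be sketched generically: buffer incoming requests until either a maximum batch size is reached or a wait budget expires, then run one batched forward pass. The sketch below is a simplified synchronous illustration of the technique, not mosec's actual API:

```python
from queue import Queue, Empty
import time

def batched_inference(batch):
    """Stand-in for a vectorized model call; real servers run this on GPU."""
    return [x * 2 for x in batch]

def dynamic_batcher(requests: Queue, max_batch: int = 8, max_wait_s: float = 0.01):
    """Collect requests until the batch is full or the wait budget expires."""
    batch = []
    deadline = time.monotonic() + max_wait_s
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # wait budget exhausted; serve what we have
        try:
            batch.append(requests.get(timeout=remaining))
        except Empty:
            break  # no more requests arrived in time
    return batched_inference(batch) if batch else []

q = Queue()
for i in range(5):
    q.put(i)
results = dynamic_batcher(q)
print(results)  # [0, 2, 4, 6, 8]
```

The trade-off the `max_wait_s` budget expresses is the core of dynamic batching: waiting longer fills bigger batches (better GPU utilization), while a shorter budget keeps per-request latency low.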

  • pinferencia

    Python + Inference - Model Deployment library in Python. Simplest model inference server ever.

  • OneDiffusion

    OneDiffusion: Run any Stable Diffusion models and fine-tuned weights with ease

  • Project mention: OneDiffusion | news.ycombinator.com | 2023-08-22
  • chitra

    A multi-functional library for full-stack Deep Learning. Simplifies Model Building, API development, and Model Deployment.

  • serving-pytorch-models

    Serving PyTorch models with TorchServe :fire:

  • inferencedb

    🚀 Stream inferences of real-time ML models in production to any data lake (Experimental)

  • vllm-rocm

    vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs

  • Project mention: Experimental Mixtral MoE on vLLM! | /r/LocalLLaMA | 2023-12-10
  • Drogon-torch-serve

    Serve pytorch / torch models using Drogon

  • sdk-python

    Python library for Modzy Machine Learning Operations (MLOps) Platform (by modzy)

  • sdk-javascript

    The official JavaScript SDK for the Modzy Machine Learning Operations (MLOps) Platform.

  • deprecated-core

    🔮 Instill Core contains components for supporting Instill VDP and Instill Model

  • Project mention: Building an Instill AI Pipeline in 5 minutes | dev.to | 2023-10-22

    Step 1: Log in to your InstillAI Cloud account. If you don't have an account yet, you can create one here for free using your Email or Google or GitHub ID.

  • TFServing-Demos

    TF Serving demos

  • MLDrop

    MLDrop model serving for Pytorch

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates how often a repo was mentioned in the last 12 months, or since we started tracking (Dec 2020).

model-serving related posts

  • Experimental Mixtral MoE on vLLM!

    2 projects | /r/LocalLLaMA | 10 Dec 2023
  • Who's hiring developer advocates? (December 2023)

    4 projects | dev.to | 4 Dec 2023
  • functime: NEW Data - star count:616.0

    1 project | /r/algoprojects | 8 Nov 2023
  • functime: NEW Data - star count:601.0

    1 project | /r/algoprojects | 22 Oct 2023
  • functime: NEW Data - star count:601.0

    1 project | /r/algoprojects | 21 Oct 2023
  • functime: NEW Data - star count:601.0

    1 project | /r/algoprojects | 20 Oct 2023
  • 20x Faster as the Beginning: Introducing pgvecto.rs extension written in Rust

    6 projects | dev.to | 6 Aug 2023

Index

What are some of the best open-source model-serving projects? This list will help you:

Project Stars
1 vllm 19,344
2 BentoML 6,603
3 kserve 3,111
4 lightllm 1,856
5 aici 1,756
6 mlrun 1,316
7 hopsworks 1,086
8 functime 923
9 truss 838
10 Yatai 766
11 mosec 712
12 pinferencia 558
13 OneDiffusion 323
14 chitra 224
15 serving-pytorch-models 100
16 inferencedb 77
17 vllm-rocm 76
18 Drogon-torch-serve 26
19 sdk-python 24
20 sdk-javascript 16
21 deprecated-core 13
22 TFServing-Demos 11
23 MLDrop 3
