[D] Deploying ML models - batching

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning

  • storium-backend

    Source code for the web backend for hosting story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Loop Story Generation"

    If you’re willing to roll your own, you can see an example from my latest research project that makes use of asyncio.

  • server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)

    I've seen this called "dynamic batching" most places at work. NVIDIA has the Triton Inference Server, which works fine for us. I'd say you'll likely get more speedup from dynamic batching on GPU than on CPU, depending on the model architecture. The overall structure probably looks something like this: one inference thread; when requests come in (from many threads) you add them to a queue, and when the queue is full or the oldest enqueued request times out, you construct your batch and run inference (a minimal sketch of this structure appears after the list below).


NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

