storium-backend
Source code for the web backend for hosting story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Loop Story Generation"
If you’re willing to roll your own, you can see an example from my latest research project that makes use of asyncio.
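For rolling your own asyncio-based backend, a common pattern is to run the blocking model call in a thread executor while serializing access to the model, so the event loop stays responsive. This is a minimal sketch of that idea, not the linked project's actual code; `run_model` is a hypothetical stand-in for a real generation call.

```python
import asyncio


def run_model(prompt):
    # Stand-in for a blocking model call (hypothetical).
    return prompt.upper()


async def handle_request(prompt, model_lock):
    """Handle one request without blocking the event loop."""
    loop = asyncio.get_running_loop()
    # Serialize model access with a lock; run the blocking call
    # in the default thread executor so other coroutines can run.
    async with model_lock:
        return await loop.run_in_executor(None, run_model, prompt)


async def main():
    lock = asyncio.Lock()
    # Many concurrent requests share one model behind the lock.
    return await asyncio.gather(*(handle_request(p, lock) for p in ["a", "b"]))
```

A real server would wrap `handle_request` in a web framework's route handler; the executor/lock pattern stays the same.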
-
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
I've seen this called "dynamic batching" in most places at work. Nvidia's Triton Inference Server supports it and works fine for us. You'll likely get more speedup from dynamic batching on GPU than on CPU, depending on model architecture. The overall structure looks something like this: one inference thread; requests coming in from many threads get added to a queue; and when the queue is full or the oldest enqueued request times out, you construct your batch and run inference.
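The structure described above can be sketched with a single inference thread and a shared queue. This is a hedged illustration, not Triton's implementation; `run_inference`, `MAX_BATCH`, and `MAX_WAIT_S` are assumed names and values.

```python
import queue
import threading
import time

MAX_BATCH = 8      # max requests per batch (assumption)
MAX_WAIT_S = 0.01  # max time the oldest request waits (assumption)

request_queue = queue.Queue()


def run_inference(batch):
    # Stand-in for a real batched model call: echo inputs doubled.
    return [x * 2 for x in batch]


def inference_worker(stop_event):
    """Single inference thread: drain the queue into a batch, then run it."""
    while not stop_event.is_set():
        try:
            first_item = request_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        batch = [first_item]
        # The oldest request sets the deadline for the whole batch.
        deadline = time.monotonic() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(request_queue.get(timeout=remaining))
            except queue.Empty:
                break
        outputs = run_inference([x for x, _ in batch])
        for (_, box), out in zip(batch, outputs):
            box["result"] = out
            box["done"].set()


def submit(x):
    """Called from many request-handler threads; blocks until the result is ready."""
    box = {"done": threading.Event(), "result": None}
    request_queue.put((x, box))
    box["done"].wait()
    return box["result"]
```

Request threads call `submit`, which enqueues the input and blocks on an event until the inference thread fills in the result.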
Related posts
- Is there any open source app to load a model and expose API like OpenAI?
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
- [R] Wordcraft: a Human-AI Collaborative Editor for Story Writing
- [D] Very long sequence data (books) understanding?
- [P] Question about generating stories