-
FlexLLMGen
(Discontinued) Running large language models on a single GPU for throughput-oriented scenarios.
-
petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
There is already an implementation along the same lines using a torrent-style architecture.
https://petals.dev/
-
nnl
a low-latency, high-performance inference engine for large models on low-memory GPU platforms.
I did roughly the same thing in one of my hobby projects, https://github.com/fengwang/nnl. But instead of using the SSD, I load all the weights into host memory, and while running inference through the model layer by layer, I asynchronously copy memory from global to shared memory in the hope of better performance. However, my approach is bounded by the PCI-E bandwidth.
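A minimal CUDA sketch (not taken from the nnl repository) of the streaming side of that idea: all layer weights sit in pinned host memory and are prefetched to the GPU over PCI-E on a copy stream while the previous layer computes on a compute stream, with two device buffers ping-ponged between copy and compute. `layer_forward`, `NUM_LAYERS`, and `LAYER_BYTES` are illustrative placeholders, and error checking is omitted for brevity.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define NUM_LAYERS 4
#define LAYER_BYTES (16u * 1024u * 1024u)  // 16 MiB of weights per layer (made-up size)

// Stand-in for a real layer kernel; it just reduces the weights so the copy has a consumer.
__global__ void layer_forward(const float* weights, float* out, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(out, weights[i]);
}

int main() {
    const size_t n = LAYER_BYTES / sizeof(float);

    // Weights stay in pinned host memory so cudaMemcpyAsync can overlap with kernels.
    float* h_weights[NUM_LAYERS];
    for (int l = 0; l < NUM_LAYERS; ++l) {
        cudaMallocHost((void**)&h_weights[l], LAYER_BYTES);
        for (size_t i = 0; i < n; ++i) h_weights[l][i] = 1.0f / n;
    }

    // Two device buffers: compute from one while prefetching the next layer into the other.
    float *d_weights[2], *d_out;
    cudaMalloc((void**)&d_weights[0], LAYER_BYTES);
    cudaMalloc((void**)&d_weights[1], LAYER_BYTES);
    cudaMalloc((void**)&d_out, sizeof(float));
    cudaMemset(d_out, 0, sizeof(float));

    cudaStream_t copy_s, compute_s;
    cudaStreamCreate(&copy_s);
    cudaStreamCreate(&compute_s);
    cudaEvent_t ready[2], done[2];
    for (int b = 0; b < 2; ++b) { cudaEventCreate(&ready[b]); cudaEventCreate(&done[b]); }

    // Prefetch layer 0 before the pipeline starts.
    cudaMemcpyAsync(d_weights[0], h_weights[0], LAYER_BYTES, cudaMemcpyHostToDevice, copy_s);
    cudaEventRecord(ready[0], copy_s);

    for (int l = 0; l < NUM_LAYERS; ++l) {
        const int cur = l % 2, nxt = (l + 1) % 2;

        if (l + 1 < NUM_LAYERS) {
            // Don't overwrite buffer `nxt` until the layer that last used it has finished,
            // then start copying layer l+1 while layer l computes.
            cudaStreamWaitEvent(copy_s, done[nxt], 0);
            cudaMemcpyAsync(d_weights[nxt], h_weights[l + 1], LAYER_BYTES,
                            cudaMemcpyHostToDevice, copy_s);
            cudaEventRecord(ready[nxt], copy_s);
        }

        // The kernel waits only for its own weights, not for the whole copy queue.
        cudaStreamWaitEvent(compute_s, ready[cur], 0);
        layer_forward<<<(unsigned)((n + 255) / 256), 256, 0, compute_s>>>(d_weights[cur], d_out, n);
        cudaEventRecord(done[cur], compute_s);
    }

    cudaDeviceSynchronize();
    float out = 0.0f;
    cudaMemcpy(&out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("result: %f, last CUDA error: %s\n", out, cudaGetErrorString(cudaGetLastError()));
    return 0;
}
```

The overlap hides the copy latency behind compute, but every layer's weights still cross PCI-E once per forward pass, so once the kernels run faster than the transfers the throughput ceiling is exactly the PCI-E bandwidth the comment describes.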