[D] Is there an efficient way to make inferences with open-source LLM?

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning

  • mpt-30B-inference

    Run inference on MPT-30B using CPU

  • 4-bit quantization. I've used this implementation: https://github.com/abacaj/mpt-30B-inference/tree/main (a minimal sketch follows below)
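
    The linked repo runs a GGML-quantized MPT-30B on CPU through the ctransformers library. A minimal sketch of that pattern, assuming a local 4-bit model file (the path, thread count, and sampling settings here are illustrative, not taken from the repo):

```python
# CPU inference sketch in the style of the linked repo, which wraps a
# GGML-quantized MPT-30B with ctransformers. The model path is an
# assumption; check the repo's README for the actual file it downloads.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "models/mpt-30b-chat.ggmlv0.q4_0.bin",  # assumed local 4-bit GGML file
    model_type="mpt",  # tells ctransformers which architecture to load
    threads=8,         # CPU threads; tune to your machine
)

# Generation runs entirely on the CPU; expect modest tokens/s for a 30B model.
print(llm("What is quantization?", max_new_tokens=128, temperature=0.7))
```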

  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

  • I found vLLM to work pretty well; it gives a nice throughput boost (see the sketch below). It doesn't support MPT yet, although you can try to add it: https://github.com/vllm-project/vllm. There's exllama for running quantized models: https://github.com/turboderp/exllama. You can also try TGI: https://github.com/huggingface/text-generation-inference
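
    For reference, a minimal vLLM sketch. The model id is just an example of a supported Llama-family checkpoint, since MPT wasn't supported at the time of the thread:

```python
# Minimal vLLM usage sketch; substitute any model architecture vLLM supports.
from vllm import LLM, SamplingParams

llm = LLM(model="openlm-research/open_llama_7b")  # example HF model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# generate() batches prompts together, which is where the throughput
# boost from vLLM's PagedAttention-based scheduler comes from.
outputs = llm.generate(["What is quantization?"], params)
for out in outputs:
    print(out.outputs[0].text)
```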

  • exllama

    A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
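
    A sketch along the lines of exllama's bundled example scripts. It assumes a 4-bit GPTQ Llama checkpoint on disk, and that it runs from a checkout of the repo, since exllama is imported as local modules rather than an installed package:

```python
# Sketch modeled on exllama's example scripts; the model directory below
# is an assumed 4-bit GPTQ Llama checkpoint, not a real path.
import glob
import os

from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_dir = "/models/llama-13b-4bit-128g"  # assumed checkpoint location
config = ExLlamaConfig(os.path.join(model_dir, "config.json"))
config.model_path = glob.glob(os.path.join(model_dir, "*.safetensors"))[0]

model = ExLlama(config)  # loads the quantized weights onto the GPU
tokenizer = ExLlamaTokenizer(os.path.join(model_dir, "tokenizer.model"))
generator = ExLlamaGenerator(model, tokenizer, ExLlamaCache(model))
generator.settings.temperature = 0.7

print(generator.generate_simple("What is quantization?", max_new_tokens=128))
```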

  • text-generation-inference

    Large Language Model Text Generation Inference
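
    TGI runs as a standalone server (e.g. via its Docker image), so inference is a network call. A minimal sketch using the official text_generation Python client, assuming a server is already listening on port 8080:

```python
# Queries a running TGI server with the `text_generation` client
# (pip install text-generation). The URL assumes a local server on port 8080.
from text_generation import Client

client = Client("http://127.0.0.1:8080")
response = client.generate(
    "What is quantization?",
    max_new_tokens=128,
    temperature=0.7,
)
print(response.generated_text)
```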

NOTE: The number of mentions in this list reflects mentions in common posts plus user-suggested alternatives, so a higher number means a more popular project.

Related posts

  • Hugging Face reverts the license back to Apache 2.0

    1 project | news.ycombinator.com | 8 Apr 2024
  • AI Code assistant for about 50-70 users

    4 projects | /r/LocalLLaMA | 6 Dec 2023
  • Deploying Llama2 with vLLM vs TGI. Need advice

    3 projects | /r/LocalLLaMA | 14 Sep 2023
  • Continuous batching enables 23x throughput in LLM inference and reduces p50 latency

    1 project | news.ycombinator.com | 15 Aug 2023
  • HuggingFace Text Generation License No Longer Open-Source

    3 projects | news.ycombinator.com | 29 Jul 2023