Deploying Llama2 with vLLM vs TGI. Need advice

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

  • I've been experimenting with deploying a model on two platforms: vLLM and TGI. With the standard fp16 weights, the two perform fairly comparably. However, I observed a significant performance gap when deploying the GPTQ 4-bit version on TGI as opposed to vLLM (see the serving sketch after this list).

  • text-generation-inference

    Large Language Model Text Generation Inference

  • vllm-gptq

    A high-throughput and memory-efficient inference and serving engine for LLMs

  • The models are TheBloke/Llama2-7B-fp16 and TheBloke/Llama2-7B-GPTQ. I'm benchmarking with 1000 prompts at a request rate (number of requests per second) of 10. By default, vLLM does not support GPTQ, so I'm using this version: vLLM-GPTQ. A sketch of the benchmark loop follows below.
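
A minimal, hedged sketch of how the quantized weights can be loaded through the vLLM Python API. This is not the poster's exact setup: the vLLM-GPTQ fork exposes the same `LLM` entry point, and recent upstream vLLM releases accept `quantization="gptq"` natively, so the snippet below assumes one of those builds.

```python
# Hedged sketch: loading the GPTQ 4-bit weights with the vLLM Python API.
# Assumes a vLLM build with GPTQ support (the vLLM-GPTQ fork, or a recent
# upstream release where quantization="gptq" is built in).
from vllm import LLM, SamplingParams

sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

# The fp16 baseline would simply drop the quantization argument:
# llm = LLM(model="TheBloke/Llama2-7B-fp16")
llm = LLM(model="TheBloke/Llama2-7B-GPTQ", quantization="gptq")

outputs = llm.generate(["Explain GPTQ quantization in one sentence."], sampling)
print(outputs[0].outputs[0].text)
```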

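To reproduce the load pattern described above (1000 prompts at 10 requests per second), a simple paced async client is enough. The sketch below is an assumption-laden stand-in, not the poster's harness: it assumes a vLLM OpenAI-compatible server at http://localhost:8000/v1/completions, and the URL, payload shape, and served model name are all placeholders. TGI's /generate route would need a slightly different payload.

```python
# Hedged benchmark sketch: fire NUM_PROMPTS requests, pacing arrivals at
# REQUEST_RATE per second, and report the mean end-to-end latency.
import asyncio
import time

import aiohttp

URL = "http://localhost:8000/v1/completions"  # assumed endpoint
MODEL = "TheBloke/Llama2-7B-GPTQ"             # assumed served model name
REQUEST_RATE = 10    # requests per second, as in the post
NUM_PROMPTS = 1000   # total prompts, as in the post

async def send(session: aiohttp.ClientSession, prompt: str) -> float:
    """POST one completion request and return its end-to-end latency."""
    payload = {"model": MODEL, "prompt": prompt, "max_tokens": 128}
    start = time.perf_counter()
    async with session.post(URL, json=payload) as resp:
        await resp.json()
    return time.perf_counter() - start

async def main() -> None:
    prompts = [f"Question {i}: summarize GPTQ in one sentence."
               for i in range(NUM_PROMPTS)]
    async with aiohttp.ClientSession() as session:
        tasks = []
        for prompt in prompts:
            tasks.append(asyncio.create_task(send(session, prompt)))
            await asyncio.sleep(1 / REQUEST_RATE)  # pace arrivals at ~10 req/s
        latencies = await asyncio.gather(*tasks)
    print(f"mean end-to-end latency: {sum(latencies) / len(latencies):.2f}s")

asyncio.run(main())
```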

Related posts

  • Hugging Face reverts the license back to Apache 2.0

    1 project | news.ycombinator.com | 8 Apr 2024
  • AI Code assistant for about 50-70 users

    4 projects | /r/LocalLLaMA | 6 Dec 2023
  • Continuous batch enables 23x throughput in LLM inference and reduce p50 latency

    1 project | news.ycombinator.com | 15 Aug 2023
  • HuggingFace Text Generation License No Longer Open-Source

    3 projects | news.ycombinator.com | 29 Jul 2023
  • HuggingFace Text Generation Library License Changed from Apache 2 to Hfoil

    1 project | news.ycombinator.com | 28 Jul 2023