lmdeploy Alternatives
Similar projects and alternatives to lmdeploy
- text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
- FLiPStackWeekly: FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more.
- TensorRT: NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
- AdaptiveCpp: Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: the independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system, even at runtime.
- examples (by towhee-io): Analyze unstructured data with Towhee, such as reverse image search, reverse video search, audio classification, question-and-answer systems, molecular search, etc.
lmdeploy reviews and mentions
- FLaNK-AIM Weekly 06 May 2024
- AMD May Get Across the CUDA Moat
I wouldn’t say ROCm code is “slower”, per se, but in practice that’s how it presents. References:
https://github.com/InternLM/lmdeploy
https://github.com/vllm-project/vllm
https://github.com/OpenNMT/CTranslate2
You know what’s missing from all of these and many more like them? Support for ROCm. This is all before you get to the really wildly performant stuff like Triton Inference Server, FasterTransformer, TensorRT-LLM, etc.
ROCm is at the “get it to work stage” (see top comment, blog posts everywhere celebrating minor successes, etc). CUDA is at the “wring every last penny of performance out of this thing” stage.
In terms of hardware support, I think that one is obvious. The U in CUDA originally stood for unified. Look at the list of chips supported by Nvidia drivers and CUDA releases. Literally anything from at least the past 10 years that has Nvidia printed on the box will just run CUDA code.
One of my projects specifically targets Pascal up - when I thought even Pascal was a stretch. Cue my surprise when I got a report of someone casually firing it up on Maxwell when I was pretty certain there was no way it could work.
A Maxwell laptop chip. It also runs just as well on an H100.
THAT is hardware support.
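As an aside on what "Pascal and up" can look like in code, the sketch below is a hypothetical runtime guard using PyTorch's CUDA introspection; it is not taken from the commenter's project. The compute-capability numbers are standard NVIDIA designations (5.x Maxwell, 6.x Pascal, 9.0 Hopper/H100).

```python
# Hypothetical sketch: warn when the GPU is older than the Pascal (sm_6x) target.
# Assumes PyTorch is installed; compute capability 5.x = Maxwell, 6.x = Pascal,
# 9.0 = Hopper (H100).
import torch

def check_min_arch(min_major: int = 6) -> None:
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA device found")
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    if major < min_major:
        print(f"Warning: {name} is sm_{major}{minor}; "
              f"sm_{min_major}0+ (Pascal or newer) is the intended target")
    else:
        print(f"{name} (sm_{major}{minor}) meets the sm_{min_major}0+ target")

check_min_arch()
```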
- Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
vLLM has healthy competition. Not affiliated but try lmdeploy:
https://github.com/InternLM/lmdeploy
In my testing it’s significantly faster and more memory efficient than vLLM when configured with AWQ int4 and int8 KV cache.
If you look at the PRs, issues, etc. you'll see there are many more optimizations in the works. That said, there are also PRs and issues for some of the lmdeploy tricks in vLLM as well (AWQ, Triton Inference Server, etc.).
I’m really excited to see where these projects go!
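For context on the configuration mentioned above, here is a minimal sketch of an lmdeploy pipeline using AWQ int4 weights and an int8 KV cache. The model name is a placeholder, and the engine parameters (model_format, quant_policy) follow lmdeploy's TurbomindEngineConfig; exact names and defaults may vary between versions.

```python
# Minimal sketch: serving an AWQ int4 model with an int8 KV cache in lmdeploy.
# Assumes `pip install lmdeploy` and an AWQ-quantized checkpoint; the model
# path below is a placeholder, not a verified repository name.
from lmdeploy import pipeline, TurbomindEngineConfig

engine_cfg = TurbomindEngineConfig(
    model_format="awq",          # weights quantized to int4 with AWQ
    quant_policy=8,              # quantize the KV cache to int8
    cache_max_entry_count=0.8,   # fraction of free GPU memory reserved for KV cache
)

pipe = pipeline("your-org/your-awq-4bit-model", backend_config=engine_cfg)
responses = pipe(["What does KV cache quantization buy you?"])
print(responses[0].text)
```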
- Meta: Code Llama, an AI Tool for Coding
Stats
InternLM/lmdeploy is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of lmdeploy is Python.