YetAnotherChatUI vs tensorrtllm_backend

| | YetAnotherChatUI | tensorrtllm_backend |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 1 | 551 |
| Growth | - | 13.2% |
| Activity | 8.2 | 8.0 |
| Latest commit | about 1 month ago | 11 days ago |
| Language | HTML | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
YetAnotherChatUI
Ollama releases OpenAI API compatibility
I had trouble installing Ollama last time I tried; I'm going to try again tomorrow.
I've already got a web UI that "should" work with anything that matches OpenAI's chat API, though I'm sure everyone here knows how reliable air-quotes like that are when a developer says them.
https://github.com/BenWheatley/YetAnotherChatUI
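For context on what "matches OpenAI's chat API" means in practice, here is a minimal sketch using the official openai Python client pointed at a locally running Ollama instance; the port (Ollama's default 11434) and the model name are assumptions, and any server exposing the same API shape should accept the same call.

```python
# Minimal sketch: drive an OpenAI-compatible server (Ollama here) with the
# official openai client by overriding base_url.
# Assumptions: Ollama is running locally on its default port (11434) and a
# model named "llama2" has already been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # the client requires a key; Ollama ignores it
)

response = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```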
tensorrtllm_backend
How fast can one reasonably expect to get inference on a ~70B model?
TensorRT-LLM with Triton Inference Server is the fastest in Nvidia land.
https://github.com/triton-inference-server/tensorrtllm_backe...
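As a rough sanity check on what is even physically plausible for a ~70B model, single-stream decoding is largely memory-bandwidth bound: every generated token has to stream the full set of weights from GPU memory. A back-of-envelope sketch follows; the precision, bandwidth figure, and GPU count are assumptions, and engines like TensorRT-LLM add batching, quantization, and kernel work on top of this floor.

```python
# Back-of-envelope, bandwidth-bound estimate for single-stream decoding.
# Assumptions: 70B parameters in FP16 (2 bytes/param), sharded across
# 2x A100 80GB with ~2 TB/s of HBM bandwidth each; KV-cache traffic and
# kernel overheads are ignored.
params = 70e9
bytes_per_param = 2
weight_bytes = params * bytes_per_param            # ~140 GB of weights

gpus = 2
bandwidth_per_gpu = 2.0e12                         # ~2 TB/s per GPU
aggregate_bandwidth = gpus * bandwidth_per_gpu

seconds_per_token = weight_bytes / aggregate_bandwidth
print(f"~{seconds_per_token * 1e3:.0f} ms/token, "
      f"~{1 / seconds_per_token:.0f} tokens/s single stream")
# -> roughly 35 ms/token, ~29 tokens/s; batching raises aggregate throughput well beyond this.
```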
Ollama releases OpenAI API compatibility
Nvidia Triton Inference Server with the TensorRT-LLM backend:
https://github.com/triton-inference-server/tensorrtllm_backe...
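For a sense of what serving through this backend looks like from the client side, here is a minimal sketch against Triton's HTTP generate endpoint; the port, the "ensemble" model name, and the request/response fields follow the backend's quickstart defaults and are assumptions that will vary with your deployment.

```python
# Minimal sketch of querying Triton's HTTP generate endpoint once a
# TensorRT-LLM engine has been built and the server is running.
# Assumptions: default HTTP port 8000 and the "ensemble" model from the
# quickstart; field names depend on your model configuration.
import requests

payload = {
    "text_input": "What is machine learning?",
    "max_tokens": 64,
    "bad_words": "",
    "stop_words": "",
}
resp = requests.post(
    "http://localhost:8000/v2/models/ensemble/generate", json=payload
)
resp.raise_for_status()
print(resp.json()["text_output"])
```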
It’s used by Mistral, AWS, Cloudflare, and countless others.
vLLM, HF TGI, Ray Serve, etc. are certainly viable, but Triton has many truly unique and very powerful features (not to mention performance).
100k DAU doesn't mean much on its own; you'd need a better understanding of the application: input tokens, generated output tokens, request rates, peaks, etc., not to mention the required time to first token, tokens per second, and so on.
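To make that concrete, here is the kind of rough sizing arithmetic involved; every number below is a made-up assumption purely to illustrate the calculation, not a measurement.

```python
# Illustrative capacity math only -- all inputs are assumptions.
daily_active_users = 100_000
requests_per_user_per_day = 10
peak_factor = 5                          # peak traffic vs. the daily average
avg_output_tokens = 300                  # generated tokens per request

avg_requests_per_sec = daily_active_users * requests_per_user_per_day / 86_400
peak_requests_per_sec = avg_requests_per_sec * peak_factor
peak_tokens_per_sec = peak_requests_per_sec * avg_output_tokens

print(f"average: {avg_requests_per_sec:.1f} req/s")
print(f"peak:    {peak_requests_per_sec:.1f} req/s, "
      f"~{peak_tokens_per_sec:,.0f} generated tokens/s")
# Whether a given stack can sustain that depends on the model, batching,
# quantization, and the time-to-first-token budget you have to hit.
```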
Anyway, the point is Triton is just about the only thing out there for use in this general range and up.
MK1 Flywheel Unlocks the Full Potential of AMD Instinct for LLM Inference
I support any progress to erode the Nvidia monopoly.
That said, from what I'm seeing here, the free and open-source (less other aspects of the CUDA stack, of course) TensorRT-LLM[0] almost certainly bests this implementation using the Nvidia hardware they reference for comparison.
I don't have an A6000, but as an example, with the tensorrt_llm backend for Nvidia Triton Inference Server[1] (also free and open source) I get roughly 30 req/s with Mistral 7B on my RTX 4090, with significantly lower latency. Comparison benchmarks are tough, especially when published benchmarks like these are fairly scant on the real details.
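For anyone wanting to reproduce that kind of number themselves, a crude load-test sketch like the one below is usually enough for ballpark req/s and latency; the endpoint, payload, and concurrency are assumptions, and a fair comparison also needs matched batching, sequence lengths, and warm-up.

```python
# Crude concurrent load test -- fine for ballpark req/s and latency, not for
# publishable benchmarks. Endpoint and payload are assumptions (Triton's
# generate endpoint as in the quickstart).
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/v2/models/ensemble/generate"
PAYLOAD = {"text_input": "Explain GPUs in one paragraph.", "max_tokens": 128,
           "bad_words": "", "stop_words": ""}
CONCURRENCY = 32
TOTAL_REQUESTS = 256

def one_request(_):
    start = time.perf_counter()
    requests.post(URL, json=PAYLOAD).raise_for_status()
    return time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(one_request, range(TOTAL_REQUESTS)))
elapsed = time.perf_counter() - start

print(f"{TOTAL_REQUESTS / elapsed:.1f} req/s, "
      f"mean latency {sum(latencies) / len(latencies) * 1e3:.0f} ms")
```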
TensorRT-LLM has only been public for a few months, and if you peruse the docs, PRs, etc., you'll see they have many more optimizations in the works.
In typical Nvidia fashion, TensorRT-LLM runs on any Nvidia card (from laptop to datacenter) going back to Turing (five-year-old cards), assuming you have the VRAM.
You can download and run this today, free and "open source" for these implementations at least. I'm extremely skeptical of the claim that "MK1 Flywheel has the Best Throughput and Latency for LLM Inference on NVIDIA". You'll note they compare to vLLM, which is an excellent and incredible project, but if you look at vLLM vs Triton w/ TensorRT-LLM the performance improvements are dramatic.
Of course the H100/H200 are the latest and greatest ($$$$$$ and unobtanium), but one look at their performance[3] and you can see what happens when the vendor has a robust software ecosystem to help sell their hardware. Pay the Nvidia tax on the front end for the hardware, get it back as a dividend on the software.
I feel like MK1 must be aware of TensorRT-LLM, but of course those comparison benchmarks won't help sell their startup.
[0] - https://github.com/NVIDIA/TensorRT-LLM
[1] - https://github.com/triton-inference-server/tensorrtllm_backe...
[2] - https://mkone.ai/blog/mk1-flywheel-race-tuned-and-track-read...
[3] - https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source...
What are some alternatives?
model_navigator - Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.
dali_backend - The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
client - Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.
lookma - LookMa connects Android devices to locally-run LLMs
llamafile - Distribute and run LLMs with a single file.
llama.cpp - LLM inference in C/C++
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.